* WIP: new workflow architecture
* WIP: further improvements and cleanup
* WIP: cleanup and docs, replace old tool with new
* WIP: cleanup and docs, replace old tool with new
* WIP: new planner implementation using workflow
* WIP: precommit tool working as a workflow instead of a basic tool. Support for passing False to use_assistant_model to skip external models completely and use Claude only
* WIP: precommit workflow version swapped with old
* WIP: codereview
* WIP: replaced codereview
* WIP: replaced codereview
* WIP: replaced refactor
* WIP: workflow for thinkdeep
* WIP: ensure files get embedded correctly
* WIP: thinkdeep replaced with workflow version
* WIP: improved messaging when an external model's response is received
* WIP: analyze tool swapped
* WIP: updated tests. Extract only the content when building history. Use "relevant_files" for workflow tools only
* WIP: updated tests. Extract only the content when building history. Use "relevant_files" for workflow tools only
* WIP: fixed get_completion_next_steps_message missing param
* Fixed tests. Request for files consistently
* Fixed tests. Request for files consistently
* Fixed tests
* New testgen workflow tool. Updated docs
* Swap testgen workflow
* Fix CI test failures by excluding API-dependent tests
  - Update GitHub Actions workflow to exclude simulation tests that require API keys
  - Fix collaboration tests to properly mock workflow tool expert analysis calls
  - Update test assertions to handle new workflow tool response format
  - Ensure unit tests run without external API dependencies in CI
* WIP: update tests to match new tools
* WIP: update tests to match new tools
* WIP: update tests to match new tools
* Should help with https://github.com/BeehiveInnovations/zen-mcp-server/issues/97. Clear Python cache when running the script: https://github.com/BeehiveInnovations/zen-mcp-server/issues/96. Improved retry error logging. Cleanup
* WIP: chat tool using new architecture and improved code sharing
* Removed todo
* Removed todo
* Cleanup old name
* Tweak wordings
* Tweak wordings. Migrate old tests
* Support for Flash 2.0 and Flash Lite 2.0
* Support for Flash 2.0 and Flash Lite 2.0
* Support for Flash 2.0 and Flash Lite 2.0. Fixed test
* Improved consensus to use the workflow base class
* Improved consensus to use the workflow base class
* Allow images
* Allow images
* Replaced old consensus tool
* Cleanup tests
* Tests for prompt size
* New tool: docgen. Tests for prompt size. Fixes: https://github.com/BeehiveInnovations/zen-mcp-server/issues/107. Use available token size limits: https://github.com/BeehiveInnovations/zen-mcp-server/issues/105
* Improved docgen prompt. Exclude TestGen from pytest inclusion
* Updated errors
* Lint
* DocGen instructed not to fix bugs, surface them and stick to d
* WIP
* Stop Claude from being lazy and only documenting a small handful
* More style rules

Co-authored-by: Claude <noreply@anthropic.com>
216 lines
8.9 KiB
Python
"""
|
|
Chat tool - General development chat and collaborative thinking
|
|
|
|
This tool provides a conversational interface for general development assistance,
|
|
brainstorming, problem-solving, and collaborative thinking. It supports file context,
|
|
images, and conversation continuation for seamless multi-turn interactions.
|
|
"""
|
|
|
|
from typing import TYPE_CHECKING, Any, Optional

from pydantic import Field

if TYPE_CHECKING:
    from tools.models import ToolModelCategory

from config import TEMPERATURE_BALANCED
from systemprompts import CHAT_PROMPT
from tools.shared.base_models import ToolRequest

from .simple.base import SimpleTool

# Field descriptions matching the original Chat tool exactly
CHAT_FIELD_DESCRIPTIONS = {
    "prompt": (
        "You MUST provide a thorough, expressive question or share an idea with as much context as possible. "
        "Remember: you're talking to an assistant who has deep expertise and can provide nuanced insights. "
        "Include your current thinking, specific challenges, background context, what you've already tried, "
        "and what kind of response would be most helpful. The more context and detail you provide, the more "
        "valuable and targeted the response will be."
    ),
    "files": "Optional files for context (must be FULL absolute paths to real files / folders - DO NOT SHORTEN)",
    "images": (
        "Optional images for visual context. Useful for UI discussions, diagrams, visual problems, "
        "error screens, or architectural mockups. (must be FULL absolute paths to real files / folders - "
        "DO NOT SHORTEN - OR these can be base64 data)"
    ),
}


class ChatRequest(ToolRequest):
    """Request model for Chat tool."""

    prompt: str = Field(..., description=CHAT_FIELD_DESCRIPTIONS["prompt"])
    files: Optional[list[str]] = Field(default_factory=list, description=CHAT_FIELD_DESCRIPTIONS["files"])
    images: Optional[list[str]] = Field(default_factory=list, description=CHAT_FIELD_DESCRIPTIONS["images"])
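Both list fields in `ChatRequest` use `default_factory=list` rather than `default=[]`. The distinction matters: a single default list object would be shared (aliased) across every request instance. A minimal stdlib sketch of the same idea using `dataclasses` (the `Req` class here is hypothetical, for illustration only, not part of this codebase):

```python
from dataclasses import dataclass, field


@dataclass
class Req:
    # Each instance gets its own fresh list, mirroring
    # pydantic's Field(default_factory=list) behavior.
    files: list = field(default_factory=list)


a, b = Req(), Req()
a.files.append("main.py")
# b is unaffected because the default list is not shared
print(b.files)  # → []
```

With a plain `default=[]`, appending to `a.files` would also mutate `b.files`, which is why pydantic (like `dataclasses`) requires a factory for mutable defaults.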


class ChatTool(SimpleTool):
    """
    General development chat and collaborative thinking tool using SimpleTool architecture.

    This tool provides identical functionality to the original Chat tool but uses the new
    SimpleTool architecture for cleaner code organization and better maintainability.

    Migration note: This tool is designed to be a drop-in replacement for the original
    Chat tool with 100% behavioral compatibility.
    """

    def get_name(self) -> str:
        return "chat"

    def get_description(self) -> str:
        return (
            "GENERAL CHAT & COLLABORATIVE THINKING - Use the AI model as your thinking partner! "
            "Perfect for: bouncing ideas during your own analysis, getting second opinions on your plans, "
            "collaborative brainstorming, validating your checklists and approaches, exploring alternatives. "
            "Also great for: explanations, comparisons, general development questions. "
            "Use this when you want to ask questions, brainstorm ideas, get opinions, discuss topics, "
            "share your thinking, or need explanations about concepts and approaches. "
            "Note: If you're not currently using a top-tier model such as Opus 4 or above, these tools can "
            "provide enhanced capabilities."
        )

    def get_system_prompt(self) -> str:
        return CHAT_PROMPT

    def get_default_temperature(self) -> float:
        return TEMPERATURE_BALANCED

    def get_model_category(self) -> "ToolModelCategory":
        """Chat prioritizes fast responses and cost efficiency."""
        from tools.models import ToolModelCategory

        return ToolModelCategory.FAST_RESPONSE

    def get_request_model(self):
        """Return the Chat-specific request model."""
        return ChatRequest

    # === Schema Generation ===
    # For maximum compatibility, we override get_input_schema() to match the original Chat tool exactly.

    def get_input_schema(self) -> dict[str, Any]:
        """
        Generate input schema matching the original Chat tool exactly.

        This maintains 100% compatibility with the original Chat tool by using
        the same schema generation approach while still benefiting from SimpleTool
        convenience methods.
        """
        schema = {
            "type": "object",
            "properties": {
                "prompt": {
                    "type": "string",
                    "description": CHAT_FIELD_DESCRIPTIONS["prompt"],
                },
                "files": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": CHAT_FIELD_DESCRIPTIONS["files"],
                },
                "images": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": CHAT_FIELD_DESCRIPTIONS["images"],
                },
                "model": self.get_model_field_schema(),
                "temperature": {
                    "type": "number",
                    "description": "Response creativity (0-1, default 0.5)",
                    "minimum": 0,
                    "maximum": 1,
                },
                "thinking_mode": {
                    "type": "string",
                    "enum": ["minimal", "low", "medium", "high", "max"],
                    "description": (
                        "Thinking depth: minimal (0.5% of model max), low (8%), medium (33%), high (67%), "
                        "max (100% of model max)"
                    ),
                },
                "use_websearch": {
                    "type": "boolean",
                    "description": (
                        "Enable web search for documentation, best practices, and current information. "
                        "Particularly useful for: brainstorming sessions, architectural design discussions, "
                        "exploring industry best practices, working with specific frameworks/technologies, "
                        "researching solutions to complex problems, or when current documentation and "
                        "community insights would enhance the analysis."
                    ),
                    "default": True,
                },
                "continuation_id": {
                    "type": "string",
                    "description": (
                        "Thread continuation ID for multi-turn conversations. Can be used to continue "
                        "conversations across different tools. Only provide this if continuing a previous "
                        "conversation thread."
                    ),
                },
            },
            "required": ["prompt"] + (["model"] if self.is_effective_auto_mode() else []),
        }

        return schema
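Since `get_input_schema()` returns plain JSON Schema, a client can pre-validate a request before calling the tool. A minimal sketch of such a check (the `validate_request` helper and the trimmed schema below are hypothetical, not part of this codebase; they assume the `required` list contains only `prompt`, i.e. auto mode is off):

```python
def validate_request(schema: dict, request: dict) -> list[str]:
    """Return a list of problems; an empty list means the request looks valid."""
    problems = []
    # Check required fields declared by the schema
    for name in schema.get("required", []):
        if name not in request:
            problems.append(f"missing required field: {name}")
    props = schema["properties"]
    # Enum check for thinking_mode
    mode = request.get("thinking_mode")
    if mode is not None and mode not in props["thinking_mode"]["enum"]:
        problems.append(f"invalid thinking_mode: {mode!r}")
    # Range check for temperature
    temp = request.get("temperature")
    if temp is not None and not (
        props["temperature"]["minimum"] <= temp <= props["temperature"]["maximum"]
    ):
        problems.append(f"temperature out of range: {temp}")
    return problems


# A trimmed schema mirroring the shape built above
schema = {
    "type": "object",
    "properties": {
        "prompt": {"type": "string"},
        "temperature": {"type": "number", "minimum": 0, "maximum": 1},
        "thinking_mode": {"type": "string", "enum": ["minimal", "low", "medium", "high", "max"]},
    },
    "required": ["prompt"],
}

print(validate_request(schema, {"prompt": "hi", "thinking_mode": "high"}))  # → []
print(validate_request(schema, {"temperature": 1.5}))  # missing prompt, temperature out of range
```

A full JSON Schema validator (e.g. the `jsonschema` package) would cover more keywords; this hand-rolled check only illustrates the constraints the Chat schema actually declares.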

    # === Tool-specific field definitions (alternative approach for reference) ===
    # These aren't used since we override get_input_schema(), but they show how
    # the tool could be implemented using the automatic SimpleTool schema building.

    def get_tool_fields(self) -> dict[str, dict[str, Any]]:
        """
        Tool-specific field definitions for ChatSimple.

        Note: This method isn't used since we override get_input_schema() for
        exact compatibility, but it demonstrates how ChatSimple could be
        implemented using automatic schema building.
        """
        return {
            "prompt": {
                "type": "string",
                "description": CHAT_FIELD_DESCRIPTIONS["prompt"],
            },
            "files": {
                "type": "array",
                "items": {"type": "string"},
                "description": CHAT_FIELD_DESCRIPTIONS["files"],
            },
            "images": {
                "type": "array",
                "items": {"type": "string"},
                "description": CHAT_FIELD_DESCRIPTIONS["images"],
            },
        }

    def get_required_fields(self) -> list[str]:
        """Required fields for ChatSimple tool."""
        return ["prompt"]

    # === Hook Method Implementations ===

    async def prepare_prompt(self, request: ChatRequest) -> str:
        """
        Prepare the chat prompt with optional context files.

        This implementation matches the original Chat tool exactly while using
        SimpleTool convenience methods for cleaner code.
        """
        # Use SimpleTool's Chat-style prompt preparation
        return self.prepare_chat_style_prompt(request)

    def format_response(self, response: str, request: ChatRequest, model_info: Optional[dict] = None) -> str:
        """Format the chat response to match the original Chat tool exactly."""
        return (
            f"{response}\n\n---\n\n**Claude's Turn:** Evaluate this perspective alongside your analysis to "
            "form a comprehensive solution and continue with the user's request and task at hand."
        )

    def get_websearch_guidance(self) -> str:
        """Return Chat tool-style web search guidance."""
        return self.get_chat_style_websearch_guidance()
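`format_response` only appends a fixed markdown footer to the model's reply; the reply text itself is passed through untouched. A standalone sketch of that behavior (the free function `format_chat_response` is hypothetical, extracted here purely for illustration):

```python
def format_chat_response(response: str) -> str:
    # Append the fixed "Claude's Turn" footer after a markdown rule,
    # leaving the model's reply unchanged at the front.
    return (
        f"{response}\n\n---\n\n**Claude's Turn:** Evaluate this perspective alongside your analysis to "
        "form a comprehensive solution and continue with the user's request and task at hand."
    )


out = format_chat_response("Use a queue here.")
print(out.startswith("Use a queue here."))  # → True
print("**Claude's Turn:**" in out)  # → True
```

Because the footer is plain string concatenation, downstream consumers can split on the `\n\n---\n\n` rule to recover the original reply if they need it.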