Advertise prompts, fixes https://github.com/BeehiveInnovations/zen-mcp-server/issues/63
46 README.md
@@ -683,43 +683,31 @@ For detailed tool parameters and configuration options, see the [Advanced Usage
Zen supports powerful structured prompts in Claude Code for quick access to tools and models:

#### Basic Tool Prompts

- `/zen:thinkdeeper` - Use thinkdeep tool with auto-selected model
- `/zen:chat` - Use chat tool with auto-selected model
- `/zen:consensus` - Use consensus tool with auto-selected models
- `/zen:codereview` - Use codereview tool with auto-selected model
- `/zen:analyze` - Use analyze tool with auto-selected model

#### Model-Specific Tool Prompts

- `/zen:chat:o3 hello there` - Use chat tool specifically with O3 model
- `/zen:thinkdeep:flash analyze this quickly` - Use thinkdeep tool with Flash for speed
- `/zen:consensus:pro,flash:for,o3:against debate this proposal` - Use consensus with specific model stances
- `/zen:codereview:pro review for security` - Use codereview tool with Gemini Pro for thorough analysis
- `/zen:debug:grok help with this error` - Use debug tool with GROK model
- `/zen:analyze:gemini-2.5-flash-preview-05-20 examine these files` - Use analyze tool with specific Gemini model

#### Tool Prompts

- `/zen:chat ask local-llama what 2 + 2 is` - Use chat tool with auto-selected model
- `/zen:thinkdeep use o3 and tell me why the code isn't working in sorting.swift` - Use thinkdeep tool with auto-selected model
- `/zen:consensus use o3:for and flash:against and tell me if adding feature X is a good idea for the project. Pass them a summary of what it does.` - Use consensus tool with default configuration
- `/zen:codereview review for security module ABC` - Use codereview tool with auto-selected model
- `/zen:debug table view is not scrolling properly, very jittery, I suspect the code is in my_controller.m` - Use debug tool with auto-selected model
- `/zen:analyze examine these files and tell me if I'm using the CoreAudio framework properly` - Use analyze tool with auto-selected model

#### Continuation Prompts

- `/zen:continue` - Continue previous conversation using chat tool
- `/zen:chat:continue` - Continue previous conversation using chat tool specifically
- `/zen:thinkdeep:continue` - Continue previous conversation using thinkdeep tool
- `/zen:consensus:continue` - Continue previous consensus discussion with additional analysis
- `/zen:analyze:continue` - Continue previous conversation using analyze tool
- `/zen:chat continue and ask gemini pro if framework B is better` - Continue previous conversation using chat tool

#### Advanced Examples

- `/zen:thinkdeeper:o3 check if the algorithm in @sort.py is performant and if there are alternatives we could explore`
- `/zen:consensus:flash:for,o3:against,pro:neutral debate whether we should migrate to GraphQL for our API`
- `/zen:precommit:pro confirm these changes match our requirements in COOL_FEATURE.md`
- `/zen:testgen:flash write me tests for class ABC`
- `/zen:refactor:local-llama propose a decomposition strategy, make a plan and save it in FIXES.md then share this with o3 to confirm along with large_file.swift`
- `/zen:thinkdeeper check if the algorithm in @sort.py is performant and if there are alternatives we could explore`
- `/zen:consensus debate whether we should migrate to GraphQL for our API`
- `/zen:precommit confirm these changes match our requirements in COOL_FEATURE.md`
- `/zen:testgen write me tests for class ABC`
- `/zen:refactor propose a decomposition strategy, make a plan and save it in FIXES.md`

#### Syntax Format

The structured prompt format is: `/zen:[tool]:[model / continue] [your_message]`

The prompt format is: `/zen:[tool] [your_message]`

- `[tool]` - Any available tool name (chat, thinkdeep, codereview, debug, analyze, etc.)
- `[model / continue]` - Either a specific model name (o3, flash, pro, grok, etc.) or the keyword `continue` to continue the conversation using this tool
- `[your_message]` - Your actual prompt or question
- `[tool]` - Any available tool name (chat, thinkdeep, codereview, debug, analyze, consensus, etc.)
- `[your_message]` - Your request, question, or instructions for the tool

**Note**: When using `:continue`, it intelligently resumes the previous conversation with the specified tool, maintaining full context and conversation history.

**Note:** All prompts will show as "(MCP) [tool]" in Claude Code to indicate they're provided by the MCP server.
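For a sense of what happens server-side, here is a condensed, illustrative sketch (not the actual implementation) of how a prompt name such as `continue` or `thinkdeep` resolves to a tool, simplified from the `handle_get_prompt` changes in `server.py` further down in this commit. The small `PROMPT_TEMPLATES` and `TOOLS` dictionaries below are stand-ins for the real registries:

```python
# Condensed sketch of the prompt-name -> tool lookup described above; the real
# logic lives in handle_get_prompt in server.py (see the diff later in this commit).
PROMPT_TEMPLATES = {"chat": {"name": "chat", "description": "Chat with a model"}}  # stand-in
TOOLS = {"chat": object(), "thinkdeep": object(), "consensus": object()}           # stand-in


def resolve_prompt_to_tool(name: str) -> str:
    """Return the tool a prompt like '/zen:<name>' should invoke."""
    if name.lower() == "continue":
        return "chat"  # /zen:continue defaults to continuing with the chat tool
    for tool_name, info in PROMPT_TEMPLATES.items():
        if info["name"] == name:
            return tool_name  # prompt registered via a template
    if name in TOOLS:
        return name  # direct tool name, e.g. /zen:thinkdeep
    raise ValueError(f"Unknown prompt: {name}")


print(resolve_prompt_to_tool("continue"))   # -> chat
print(resolve_prompt_to_tool("thinkdeep"))  # -> thinkdeep
```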
### Add Your Own Tools
@@ -14,7 +14,7 @@ import os
# These values are used in server responses and for tracking releases
# IMPORTANT: This is the single source of truth for version and author info
# Semantic versioning: MAJOR.MINOR.PATCH
__version__ = "4.8.3"
__version__ = "4.9.1"
# Last update date in ISO format
__updated__ = "2025-06-16"
# Primary maintainer
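These constants are consumed elsewhere in the server (for example, the `server.py` hunk below passes `__version__` as `server_version`). A minimal sketch of such a consumer, assuming the module is named `config.py` as in the upstream repository:

```python
# Illustrative only: read the canonical metadata instead of redefining it.
# The module name config.py is an assumption; it is not shown in this diff.
from config import __version__, __updated__

print(f"zen-mcp-server {__version__} (last updated {__updated__})")
```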
84 server.py
@@ -34,6 +34,7 @@ from mcp.types import (
GetPromptResult,
Prompt,
PromptMessage,
PromptsCapability,
ServerCapabilities,
TextContent,
Tool,
@@ -1065,71 +1066,46 @@ async def handle_get_prompt(name: str, arguments: dict[str, Any] = None) -> GetP
"""
logger.debug(f"MCP client requested prompt: {name} with args: {arguments}")

# Parse structured prompt names like "chat:o3", "chat:continue", or "consensus:flash:for,o3:against,pro:neutral"
parsed_model = None
is_continuation = False
consensus_models = None
base_name = name

if ":" in name:
parts = name.split(":", 1)
base_name = parts[0]
second_part = parts[1]

# Check if the second part is "continue" (special keyword)
if second_part.lower() == "continue":
is_continuation = True
logger.debug(f"Parsed continuation prompt: tool='{base_name}', continue=True")
elif base_name == "consensus" and "," in second_part:
# Handle consensus tool format: "consensus:flash:for,o3:against,pro:neutral"
consensus_models = ConsensusTool.parse_structured_prompt_models(second_part)
logger.debug(f"Parsed consensus prompt with models: {consensus_models}")
else:
parsed_model = second_part
logger.debug(f"Parsed structured prompt: tool='{base_name}', model='{parsed_model}'")

# Handle special "continue" cases
if base_name.lower() == "continue":
# Handle special "continue" case
if name.lower() == "continue":
# This is "/zen:continue" - use chat tool as default for continuation
tool_name = "chat"
is_continuation = True
template_info = {
"name": "continue",
"description": "Continue the previous conversation",
"template": "Continue the conversation",
}
logger.debug("Using /zen:continue - defaulting to chat tool with continuation")
logger.debug("Using /zen:continue - defaulting to chat tool")
else:
# Find the corresponding tool by checking prompt names
tool_name = None
template_info = None

# Check if it's a known prompt name (using base_name)
# Check if it's a known prompt name
for t_name, t_info in PROMPT_TEMPLATES.items():
if t_info["name"] == base_name:
if t_info["name"] == name:
tool_name = t_name
template_info = t_info
break

# If not found, check if it's a direct tool name
if not tool_name and base_name in TOOLS:
tool_name = base_name
if not tool_name and name in TOOLS:
tool_name = name
template_info = {
"name": base_name,
"description": f"Use {base_name} tool",
"template": f"Use {base_name}",
"name": name,
"description": f"Use {name} tool",
"template": f"Use {name}",
}

if not tool_name:
logger.error(f"Unknown prompt requested: {name} (base: {base_name})")
logger.error(f"Unknown prompt requested: {name}")
raise ValueError(f"Unknown prompt: {name}")

# Get the template
template = template_info.get("template", f"Use {tool_name}")

# Safe template expansion with defaults
# Prioritize: parsed model > arguments model > "auto"
final_model = parsed_model or (arguments.get("model", "auto") if arguments else "auto")
final_model = arguments.get("model", "auto") if arguments else "auto"

prompt_args = {
"model": final_model,
@@ -1145,31 +1121,12 @@ async def handle_get_prompt(name: str, arguments: dict[str, Any] = None) -> GetP
logger.warning(f"Missing template argument {e} for prompt {name}, using raw template")
prompt_text = template # Fallback to raw template

# Generate tool call instruction based on the type of prompt
if is_continuation:
if base_name.lower() == "continue":
# "/zen:continue" case
tool_instruction = f"Continue the previous conversation using the {tool_name} tool"
else:
# "/zen:chat:continue" case
tool_instruction = f"Continue the previous conversation using the {tool_name} tool"
elif consensus_models:
# "/zen:consensus:flash:for,o3:against,pro:neutral" case
model_descriptions = []
for model_config in consensus_models:
if model_config["stance"] != "neutral":
model_descriptions.append(f"{model_config['model']} with {model_config['stance']} stance")
else:
model_descriptions.append(f"{model_config['model']} with neutral stance")

models_text = ", ".join(model_descriptions)
models_json = str(consensus_models).replace("'", '"') # Convert to JSON-like format for Claude
tool_instruction = f"Use the {tool_name} tool with models: {models_text}. Call the consensus tool with prompt='debate this proposal' and models={models_json}"
elif parsed_model:
# "/zen:chat:o3" case
tool_instruction = f"Use the {tool_name} tool with model '{parsed_model}'"
# Generate tool call instruction
if name.lower() == "continue":
# "/zen:continue" case
tool_instruction = f"Continue the previous conversation using the {tool_name} tool"
else:
# "/zen:chat" case
# Simple prompt case
tool_instruction = prompt_text

return GetPromptResult(
@@ -1230,7 +1187,10 @@ async def main():
InitializationOptions(
server_name="zen",
server_version=__version__,
capabilities=ServerCapabilities(tools=ToolsCapability()), # Advertise tool support capability
capabilities=ServerCapabilities(
tools=ToolsCapability(), # Advertise tool support capability
prompts=PromptsCapability(), # Advertise prompt support capability
),
),
)
@@ -198,49 +198,6 @@ class TestConsensusTool(unittest.TestCase):
self.assertIn("pro:against", models_used) # critical -> against
self.assertIn("grok", models_used) # neutral (no suffix)

def test_parse_structured_prompt_models_comprehensive(self):
"""Test the structured prompt parsing method"""
# Test basic parsing
result = ConsensusTool.parse_structured_prompt_models("flash:for,o3:against,pro:neutral")
expected = [
{"model": "flash", "stance": "for"},
{"model": "o3", "stance": "against"},
{"model": "pro", "stance": "neutral"},
]
self.assertEqual(result, expected)

# Test with defaults
result = ConsensusTool.parse_structured_prompt_models("flash:for,o3:against,pro")
expected = [
{"model": "flash", "stance": "for"},
{"model": "o3", "stance": "against"},
{"model": "pro", "stance": "neutral"}, # Defaults to neutral
]
self.assertEqual(result, expected)

# Test all neutral
result = ConsensusTool.parse_structured_prompt_models("flash,o3,pro")
expected = [
{"model": "flash", "stance": "neutral"},
{"model": "o3", "stance": "neutral"},
{"model": "pro", "stance": "neutral"},
]
self.assertEqual(result, expected)

# Test with whitespace
result = ConsensusTool.parse_structured_prompt_models(" flash:for , o3:against , pro ")
expected = [
{"model": "flash", "stance": "for"},
{"model": "o3", "stance": "against"},
{"model": "pro", "stance": "neutral"},
]
self.assertEqual(result, expected)

# Test single model
result = ConsensusTool.parse_structured_prompt_models("flash:for")
expected = [{"model": "flash", "stance": "for"}]
self.assertEqual(result, expected)


if __name__ == "__main__":
unittest.main()
@@ -4,8 +4,7 @@ Tests for the main server functionality

import pytest

from server import handle_call_tool, handle_get_prompt, handle_list_tools
from tools.consensus import ConsensusTool
from server import handle_call_tool, handle_list_tools


class TestServerTools:
@@ -37,134 +36,6 @@ class TestServerTools:
for tool in tools:
assert len(tool.description) > 50 # All should have detailed descriptions


class TestStructuredPrompts:
"""Test structured prompt parsing functionality"""

def test_parse_consensus_models_basic(self):
"""Test parsing basic consensus model specifications"""
# Test with explicit stances
result = ConsensusTool.parse_structured_prompt_models("flash:for,o3:against,pro:neutral")
expected = [
{"model": "flash", "stance": "for"},
{"model": "o3", "stance": "against"},
{"model": "pro", "stance": "neutral"},
]
assert result == expected

def test_parse_consensus_models_mixed(self):
"""Test parsing consensus models with mixed stance specifications"""
# Test with some models having explicit stances, others defaulting to neutral
result = ConsensusTool.parse_structured_prompt_models("flash:for,o3:against,pro")
expected = [
{"model": "flash", "stance": "for"},
{"model": "o3", "stance": "against"},
{"model": "pro", "stance": "neutral"}, # Defaults to neutral
]
assert result == expected

def test_parse_consensus_models_all_neutral(self):
"""Test parsing consensus models with all neutral stances"""
result = ConsensusTool.parse_structured_prompt_models("flash,o3,pro")
expected = [
{"model": "flash", "stance": "neutral"},
{"model": "o3", "stance": "neutral"},
{"model": "pro", "stance": "neutral"},
]
assert result == expected

def test_parse_consensus_models_single(self):
"""Test parsing single consensus model"""
result = ConsensusTool.parse_structured_prompt_models("flash:for")
expected = [{"model": "flash", "stance": "for"}]
assert result == expected

def test_parse_consensus_models_whitespace(self):
"""Test parsing consensus models with extra whitespace"""
result = ConsensusTool.parse_structured_prompt_models(" flash:for , o3:against , pro ")
expected = [
{"model": "flash", "stance": "for"},
{"model": "o3", "stance": "against"},
{"model": "pro", "stance": "neutral"},
]
assert result == expected

def test_parse_consensus_models_synonyms(self):
"""Test parsing consensus models with stance synonyms"""
result = ConsensusTool.parse_structured_prompt_models("flash:support,o3:oppose,pro:favor")
expected = [
{"model": "flash", "stance": "support"},
{"model": "o3", "stance": "oppose"},
{"model": "pro", "stance": "favor"},
]
assert result == expected

@pytest.mark.asyncio
async def test_consensus_structured_prompt_parsing(self):
"""Test full consensus structured prompt parsing pipeline"""
# Test parsing a complex consensus prompt
prompt_name = "consensus:flash:for,o3:against,pro:neutral"

try:
result = await handle_get_prompt(prompt_name)

# Check that it returns a valid GetPromptResult
assert result.prompt.name == prompt_name
assert result.prompt.description is not None
assert len(result.messages) == 1
assert result.messages[0].role == "user"

# Check that the instruction contains the expected model configurations
instruction_text = result.messages[0].content.text
assert "consensus" in instruction_text
assert "flash with for stance" in instruction_text
assert "o3 with against stance" in instruction_text
assert "pro with neutral stance" in instruction_text

# Check that the JSON model configuration is included
assert '"model": "flash", "stance": "for"' in instruction_text
assert '"model": "o3", "stance": "against"' in instruction_text
assert '"model": "pro", "stance": "neutral"' in instruction_text

except ValueError as e:
# If consensus tool is not properly configured, this might fail
# In that case, just check our parsing function works
assert str(e) == "Unknown prompt: consensus:flash:for,o3:against,pro:neutral"

@pytest.mark.asyncio
async def test_consensus_prompt_practical_example(self):
"""Test practical consensus prompt examples from README"""
examples = [
"consensus:flash:for,o3:against,pro:neutral",
"consensus:flash:support,o3:critical,pro",
"consensus:gemini:for,grok:against",
]

for example in examples:
try:
result = await handle_get_prompt(example)
instruction = result.messages[0].content.text

# Should contain consensus tool usage
assert "consensus" in instruction.lower()

# Should contain model configurations in JSON format
assert "[{" in instruction and "}]" in instruction

# Should contain stance information for models that have it
if ":for" in example:
assert '"stance": "for"' in instruction
if ":against" in example:
assert '"stance": "against"' in instruction
if ":support" in example:
assert '"stance": "support"' in instruction
if ":critical" in example:
assert '"stance": "critical"' in instruction

except ValueError:
# Some examples might fail if tool isn't configured
pass

@pytest.mark.asyncio
async def test_handle_call_tool_unknown(self):
"""Test calling an unknown tool"""
@@ -91,57 +91,6 @@ class ConsensusTool(BaseTool):
def __init__(self):
super().__init__()

@staticmethod
def parse_structured_prompt_models(model_spec: str) -> list[dict[str, str]]:
"""
Parse consensus model specification from structured prompt format.

This method parses structured prompt specifications used in Claude Code shortcuts
like "/zen:consensus:flash:for,o3:against,pro:neutral" to extract model configurations
with their assigned stances.

Supported formats:
- "model:stance" - Explicit stance assignment (e.g., "flash:for", "o3:against")
- "model" - Defaults to neutral stance (e.g., "pro" becomes "pro:neutral")

Supported stances:
- Supportive: "for", "support", "favor"
- Critical: "against", "oppose", "critical"
- Neutral: "neutral" (default)

Args:
model_spec (str): Comma-separated model specification string.
Examples: "flash:for,o3:against,pro:neutral" or "flash:for,o3:against,pro"

Returns:
list[dict[str, str]]: List of model configuration dictionaries with keys:
- "model": The model name (e.g., "flash", "o3", "pro")
- "stance": The normalized stance (e.g., "for", "against", "neutral")

Examples:
>>> ConsensusTool.parse_structured_prompt_models("flash:for,o3:against,pro")
[{"model": "flash", "stance": "for"}, {"model": "o3", "stance": "against"}, {"model": "pro", "stance": "neutral"}]

>>> ConsensusTool.parse_structured_prompt_models("flash,o3,pro")
[{"model": "flash", "stance": "neutral"}, {"model": "o3", "stance": "neutral"}, {"model": "pro", "stance": "neutral"}]
"""
models = []

# Split by comma to get individual model specs
model_parts = model_spec.split(",")

for part in model_parts:
part = part.strip()
if ":" in part:
# Model with stance: "flash:for" or "o3:against"
model_name, stance = part.split(":", 1)
models.append({"model": model_name.strip(), "stance": stance.strip()})
else:
# Model without stance (defaults to neutral): "pro"
models.append({"model": part.strip(), "stance": "neutral"})

return models

def get_name(self) -> str:
return "consensus"