Add DocGen tool with comprehensive documentation generation capabilities (#109)

* WIP: new workflow architecture
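
A minimal sketch of the step-based workflow pattern this architecture introduces; every name here (WorkflowTool, execute_step, call_expert_analysis) is a hypothetical illustration, not the repo's actual classes:

```python
# Hypothetical sketch of the step-based workflow pattern; names are
# illustrative and do not come from the zen-mcp-server codebase.
from abc import ABC, abstractmethod


class WorkflowTool(ABC):
    """Advance through investigation steps, then optionally hand the
    accumulated findings to an external model for expert analysis."""

    def __init__(self, total_steps: int) -> None:
        self.total_steps = total_steps
        self.findings: list[str] = []

    @abstractmethod
    def execute_step(self, step_number: int, prompt: str) -> str:
        """Run one investigation step and return its findings."""

    def call_expert_analysis(self) -> str:
        # Placeholder for the external-model call.
        return "expert analysis of: " + "; ".join(self.findings)

    def run(self, prompt: str, use_assistant_model: bool = True) -> list[str]:
        for step in range(1, self.total_steps + 1):
            self.findings.append(self.execute_step(step, prompt))
        if use_assistant_model:
            self.findings.append(self.call_expert_analysis())
        return self.findings
```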

* WIP: further improvements and cleanup

* WIP: cleanup and docs, replace old tool with new

* WIP: new planner implementation using workflow

* WIP: precommit tool working as a workflow instead of a basic tool
Support passing use_assistant_model=False to skip external models entirely and use Claude only
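
An illustrative request payload for this flag; only use_assistant_model is confirmed by the commit, the surrounding fields are assumptions about the tool's request shape:

```python
# Illustrative precommit request: use_assistant_model comes from this
# change; the other fields are assumed, not the repo's actual schema.
precommit_request = {
    "step": "Validate staged changes before commit",
    "step_number": 1,
    "use_assistant_model": False,  # skip external models; Claude handles it alone
}
```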

* WIP: precommit workflow version swapped with old

* WIP: codereview

* WIP: replaced codereview

* WIP: replaced codereview

* WIP: replaced refactor

* WIP: workflow for thinkdeep

* WIP: ensure files get embedded correctly

* WIP: thinkdeep replaced with workflow version

* WIP: improved messaging when an external model's response is received

* WIP: analyze tool swapped

* WIP: updated tests
* Extract only the content when building history
* Use "relevant_files" for workflow tools only

* WIP: fixed get_completion_next_steps_message missing param

* Fixed tests
Request files consistently

* Fixed tests

* New testgen workflow tool
Updated docs

* Swap testgen workflow

* Fix CI test failures by excluding API-dependent tests

- Update GitHub Actions workflow to exclude simulation tests that require API keys
- Fix collaboration tests to properly mock workflow tool expert analysis calls
- Update test assertions to handle new workflow tool response format
- Ensure unit tests run without external API dependencies in CI
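
One way to keep API-dependent tests out of CI is a conditional skip; a minimal pytest sketch (illustrative only — the PR itself filters the simulation tests in the GitHub Actions workflow):

```python
# Sketch: skip live-API tests when no key is configured (illustrative;
# the actual PR excludes these tests at the CI-workflow level).
import os

import pytest

requires_api_key = pytest.mark.skipif(
    not (os.getenv("GEMINI_API_KEY") or os.getenv("OPENAI_API_KEY")),
    reason="needs a live API key; excluded from CI",
)


@requires_api_key
def test_expert_analysis_roundtrip():
    ...
```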

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* WIP - Update tests to match new tools

* Should help with https://github.com/BeehiveInnovations/zen-mcp-server/issues/97
Clear the Python cache when running the script (see the sketch below): https://github.com/BeehiveInnovations/zen-mcp-server/issues/96
Improved retry error logging
Cleanup
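
For the cache issue (#96), clearing stale bytecode could look like this minimal sketch; the shipped run script's exact approach may differ:

```python
# Sketch for issue #96: remove stale __pycache__ directories under the
# project root before starting the server; the real script may differ.
import shutil
from pathlib import Path

for cache_dir in Path(".").rglob("__pycache__"):
    shutil.rmtree(cache_dir, ignore_errors=True)
```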

* WIP - chat tool using new architecture and improved code sharing

* Removed todo

* Cleanup old name

* Tweak wording

* Tweak wording
Migrate old tests

* Support for Flash 2.0 and Flash Lite 2.0

* Support for Flash 2.0 and Flash Lite 2.0
Fixed test

* Improved consensus to use the workflow base class

* Allow images

* Replaced old consensus tool

* Cleanup tests

* Tests for prompt size

* New tool: docgen
Tests for prompt size
Fixes: https://github.com/BeehiveInnovations/zen-mcp-server/issues/107
Use available token size limits: https://github.com/BeehiveInnovations/zen-mcp-server/issues/105

* Improved docgen prompt
Exclude TestGen from pytest inclusion
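
Excluding TestGen output from pytest collection can be done in conftest.py; a sketch where the glob pattern is an assumption about where those files land:

```python
# conftest.py sketch: keep generated TestGen output out of pytest
# collection. The glob below is an assumed location, not the repo's.
collect_ignore_glob = ["*testgen*"]
```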

* Updated errors

* Lint

* DocGen instructed not to fix bugs: surface them instead and stick to documentation

* WIP

* Stop Claude from being lazy and documenting only a small handful of functions

* More style rules

---------

Co-authored-by: Claude <noreply@anthropic.com>
Commit c960bcb720 (parent 0655590a51) by Beehive Innovations, 2025-06-21 23:21:19 -07:00, committed by GitHub.
58 changed files with 5492 additions and 5558 deletions.

@@ -14,9 +14,9 @@ import os
 # These values are used in server responses and for tracking releases
 # IMPORTANT: This is the single source of truth for version and author info
 # Semantic versioning: MAJOR.MINOR.PATCH
-__version__ = "5.5.3"
+__version__ = "5.5.5"
 # Last update date in ISO format
-__updated__ = "2025-06-21"
+__updated__ = "2025-06-22"
 # Primary maintainer
 __author__ = "Fahad Gilani"
@@ -82,13 +82,16 @@ DEFAULT_CONSENSUS_MAX_INSTANCES_PER_COMBINATION = 2
 #                  ↑                                       ↑
 #                  │                                       │
 #            MCP transport                        Internal processing
-#         (25K token limit)               (No MCP limit - can be 1M+ tokens)
+# (token limit from MAX_MCP_OUTPUT_TOKENS)  (No MCP limit - can be 1M+ tokens)
 #
 # MCP_PROMPT_SIZE_LIMIT: Maximum character size for USER INPUT crossing MCP transport
-# The MCP protocol has a combined request+response limit of ~25K tokens total.
+# The MCP protocol has a combined request+response limit controlled by MAX_MCP_OUTPUT_TOKENS.
 # To ensure adequate space for MCP Server → Claude CLI responses, we limit user input
-# to 50K characters (roughly ~10-12K tokens). Larger user prompts must be sent
-# as prompt.txt files to bypass MCP's transport constraints.
+# to roughly 60% of the total token budget converted to characters. Larger user prompts
+# must be sent as prompt.txt files to bypass MCP's transport constraints.
+#
+# Token to character conversion ratio: ~4 characters per token (average for code/text)
+# Default allocation: 60% of tokens for input, 40% for response
 #
 # What IS limited by this constant:
 # - request.prompt field content (user input from Claude CLI)
@@ -104,7 +107,34 @@ DEFAULT_CONSENSUS_MAX_INSTANCES_PER_COMBINATION = 2
 #
 # This ensures MCP transport stays within protocol limits while allowing internal
 # processing to use full model context windows (200K-1M+ tokens).
-MCP_PROMPT_SIZE_LIMIT = 50_000  # 50K characters (user input only)
+def _calculate_mcp_prompt_limit() -> int:
+    """
+    Calculate MCP prompt size limit based on MAX_MCP_OUTPUT_TOKENS environment variable.
+
+    Returns:
+        Maximum character count for user input prompts
+    """
+    # Check for Claude's MAX_MCP_OUTPUT_TOKENS environment variable
+    max_tokens_str = os.getenv("MAX_MCP_OUTPUT_TOKENS")
+    if max_tokens_str:
+        try:
+            max_tokens = int(max_tokens_str)
+            # Allocate 60% of tokens for input, convert to characters (~4 chars per token)
+            input_token_budget = int(max_tokens * 0.6)
+            character_limit = input_token_budget * 4
+            return character_limit
+        except (ValueError, TypeError):
+            # Fall back to default if MAX_MCP_OUTPUT_TOKENS is not a valid integer
+            pass
+
+    # Default fallback: 60,000 characters (equivalent to ~15K tokens input of 25K total)
+    return 60_000
+
+
+MCP_PROMPT_SIZE_LIMIT = _calculate_mcp_prompt_limit()
 
 # Threading configuration
 # Simple in-memory conversation threading for stateless MCP environment
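
As a worked example of the new limit calculation above: with MAX_MCP_OUTPUT_TOKENS=25000, 60% of the budget is 15,000 input tokens, which at ~4 characters per token gives the 60,000-character fallback default:

```python
# Worked example of the limit calculation introduced in the diff above.
import os

os.environ["MAX_MCP_OUTPUT_TOKENS"] = "25000"

max_tokens = int(os.environ["MAX_MCP_OUTPUT_TOKENS"])
input_token_budget = int(max_tokens * 0.6)  # 15_000 tokens reserved for input
character_limit = input_token_budget * 4    # ~4 chars/token -> 60_000 characters
assert character_limit == 60_000
```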