Add DocGen tool with comprehensive documentation generation capabilities (#109)

* WIP: new workflow architecture

* WIP: further improvements and cleanup

* WIP: cleanup and docs, replace old tool with new

* WIP: new planner implementation using workflow

* WIP: precommit tool working as a workflow instead of a basic tool
Support for passing False to use_assistant_model to skip external models completely and use Claude only
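
A minimal sketch of what this might look like in a workflow tool request; the step fields are taken from the workflow schema shown in the diff below, while the tool name and values are illustrative assumptions:

```python
# Hypothetical precommit request that opts out of external models entirely
# (only "use_assistant_model" is the new flag; the rest mirrors the workflow schema).
precommit_params = {
    "step": "Validate staged authentication changes before commit",
    "step_number": 1,
    "total_steps": 1,
    "next_step_required": False,
    "findings": "Initial review of the staged diff",
    "use_assistant_model": False,  # skip external models; Claude completes the analysis alone
}
```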

* WIP: precommit workflow version swapped with old

* WIP: codereview

* WIP: replaced codereview

* WIP: replaced refactor

* WIP: workflow for thinkdeep

* WIP: ensure files get embedded correctly

* WIP: thinkdeep replaced with workflow version

* WIP: improved messaging when an external model's response is received

* WIP: analyze tool swapped

* WIP: updated tests
* Extract only the content when building history
* Use "relevant_files" for workflow tools only

* WIP: fixed get_completion_next_steps_message missing param

* Fixed tests
Request files consistently

* Fixed tests

* New testgen workflow tool
Updated docs

* Swap testgen workflow

* Fix CI test failures by excluding API-dependent tests

- Update GitHub Actions workflow to exclude simulation tests that require API keys
- Fix collaboration tests to properly mock workflow tool expert analysis calls
- Update test assertions to handle new workflow tool response format
- Ensure unit tests run without external API dependencies in CI
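
The actual exclusion in this PR lives in the GitHub Actions workflow; as a rough equivalent in the repository's own language, a conftest.py could keep the API-dependent simulator tests out of default collection (the path glob here is an assumption):

```python
# conftest.py (sketch): skip collecting simulation tests that need live API keys.
collect_ignore_glob = ["simulator_tests/*"]
```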

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* WIP - Update tests to match new tools

* Should help with https://github.com/BeehiveInnovations/zen-mcp-server/issues/97
Clear the Python cache when running the script: https://github.com/BeehiveInnovations/zen-mcp-server/issues/96
Improved retry error logging
Cleanup
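
A rough Python equivalent of the cache cleanup the run script performs (the script itself and its exact behaviour are not shown here):

```python
# Remove stale bytecode caches so the server always loads fresh code.
import pathlib
import shutil

for cache_dir in pathlib.Path(".").rglob("__pycache__"):
    shutil.rmtree(cache_dir, ignore_errors=True)
```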

* WIP - chat tool using new architecture and improved code sharing

* Removed todo

* Cleanup old name

* Tweak wordings
Migrate old tests

* Support for Flash 2.0 and Flash Lite 2.0
Fixed test
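
A hedged sketch of what exposing the new models might involve; the alias keys and registry shape are assumptions, only the underlying Gemini model IDs are standard:

```python
# Map user-facing shorthands to the Gemini 2.0 Flash model IDs (shape assumed).
FLASH_MODEL_ALIASES = {
    "flash-2.0": "gemini-2.0-flash",
    "flashlite-2.0": "gemini-2.0-flash-lite",
}
```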

* Improved consensus to use the workflow base class

* Allow images
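
A hypothetical illustration of a workflow step that carries images; the "images" key and its values are assumptions layered on the step schema used elsewhere in this PR:

```python
consensus_params = {
    "step": "Compare the two designs shown in the attached screenshot",
    "step_number": 1,
    "total_steps": 1,
    "next_step_required": False,
    "findings": "Initial impressions of both designs",
    "images": ["/absolute/path/to/screenshot.png"],  # assumed parameter
}
```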

* Replaced old consensus tool

* Cleanup tests

* Tests for prompt size

* New tool: docgen
Tests for prompt size
Fixes: https://github.com/BeehiveInnovations/zen-mcp-server/issues/107
Use available token size limits: https://github.com/BeehiveInnovations/zen-mcp-server/issues/105
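
A minimal sketch of honouring the available token budget when embedding content; the helper name and the 4-characters-per-token heuristic are assumptions, not the server's actual implementation:

```python
def fit_to_token_budget(text, max_tokens, chars_per_token=4):
    """Truncate embedded content so it fits the model's remaining token budget."""
    max_chars = max_tokens * chars_per_token
    return text if len(text) <= max_chars else text[:max_chars]
```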

* Improved docgen prompt
Exclude TestGen from pytest collection

* Updated errors

* Lint

* DocGen instructed not to fix bugs: surface them and stick to documentation

* WIP

* Stop Claude from being lazy and documenting only a small handful of the code

* More style rules

---------

Co-authored-by: Claude <noreply@anthropic.com>

@@ -21,7 +21,12 @@ class CrossToolComprehensiveTest(ConversationBaseTest):
     def call_mcp_tool(self, tool_name: str, params: dict) -> tuple:
         """Call an MCP tool in-process"""
-        response_text, continuation_id = self.call_mcp_tool_direct(tool_name, params)
+        # Use the new method for workflow tools
+        workflow_tools = ["analyze", "debug", "codereview", "precommit", "refactor", "thinkdeep"]
+        if tool_name in workflow_tools:
+            response_text, continuation_id = super().call_mcp_tool(tool_name, params)
+        else:
+            response_text, continuation_id = self.call_mcp_tool_direct(tool_name, params)
         return response_text, continuation_id
 
     @property
@@ -96,8 +101,12 @@ def hash_pwd(pwd):
         # Step 2: Use analyze tool to do deeper analysis (fresh conversation)
         self.logger.info(" Step 2: analyze tool - Deep code analysis (fresh)")
         analyze_params = {
-            "files": [auth_file],
-            "prompt": "Find vulnerabilities",
+            "step": "Starting comprehensive code analysis to find security vulnerabilities in the authentication system",
+            "step_number": 1,
+            "total_steps": 2,
+            "next_step_required": True,
+            "findings": "Initial analysis will focus on security vulnerabilities in authentication code",
+            "relevant_files": [auth_file],
             "thinking_mode": "low",
             "model": "flash",
         }
@@ -133,8 +142,12 @@ def hash_pwd(pwd):
         # Step 4: Use debug tool to identify specific issues
         self.logger.info(" Step 4: debug tool - Identify specific problems")
         debug_params = {
-            "files": [auth_file, config_file_path],
-            "prompt": "Fix auth issues",
+            "step": "Starting debug investigation to identify and fix authentication security issues",
+            "step_number": 1,
+            "total_steps": 2,
+            "next_step_required": True,
+            "findings": "Investigating authentication vulnerabilities found in previous analysis",
+            "relevant_files": [auth_file, config_file_path],
             "thinking_mode": "low",
             "model": "flash",
         }
@@ -153,9 +166,13 @@ def hash_pwd(pwd):
         if continuation_id4:
             self.logger.info(" Step 5: debug continuation - Additional analysis")
             debug_continue_params = {
+                "step": "Continuing debug investigation to fix password hashing implementation",
+                "step_number": 2,
+                "total_steps": 2,
+                "next_step_required": False,
+                "findings": "Building on previous analysis to fix weak password hashing",
                 "continuation_id": continuation_id4,
-                "files": [auth_file, config_file_path],
-                "prompt": "Fix password hashing",
+                "relevant_files": [auth_file, config_file_path],
                 "thinking_mode": "low",
                 "model": "flash",
             }
@@ -168,8 +185,12 @@ def hash_pwd(pwd):
         # Step 6: Use codereview for comprehensive review
         self.logger.info(" Step 6: codereview tool - Comprehensive code review")
         codereview_params = {
-            "files": [auth_file, config_file_path],
-            "prompt": "Security review",
+            "step": "Starting comprehensive security code review of authentication system",
+            "step_number": 1,
+            "total_steps": 2,
+            "next_step_required": True,
+            "findings": "Performing thorough security review of authentication code and configuration",
+            "relevant_files": [auth_file, config_file_path],
             "thinking_mode": "low",
             "model": "flash",
         }
@@ -201,9 +222,13 @@ def secure_login(user, pwd):
         improved_file = self.create_additional_test_file("auth_improved.py", improved_code)
         precommit_params = {
+            "step": "Starting pre-commit validation of improved authentication code",
+            "step_number": 1,
+            "total_steps": 2,
+            "next_step_required": True,
+            "findings": "Validating improved authentication implementation before commit",
             "path": self.test_dir,
-            "files": [auth_file, config_file_path, improved_file],
-            "prompt": "Ready to commit",
+            "relevant_files": [auth_file, config_file_path, improved_file],
             "thinking_mode": "low",
             "model": "flash",
         }