Add DocGen tool with comprehensive documentation generation capabilities (#109)

* WIP: new workflow architecture

* WIP: further improvements and cleanup

* WIP: cleanup and docs, replace old tool with new

* WIP: new planner implementation using workflow

* WIP: precommit tool working as a workflow instead of a basic tool
Support passing use_assistant_model=False to skip external models entirely and use Claude only
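
A minimal sketch of how a caller might use that flag; every field except use_assistant_model is an assumption for illustration, not the project's actual request schema:

```python
# Hypothetical precommit request; only use_assistant_model comes from the commit above.
precommit_request = {
    "prompt": "Validate the staged changes before committing",
    "use_assistant_model": False,  # skip external models entirely; Claude handles the step alone
}
```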

* WIP: precommit workflow version swapped with old

* WIP: codereview

* WIP: replaced codereview

* WIP: replaced refactor

* WIP: workflow for thinkdeep

* WIP: ensure files get embedded correctly

* WIP: thinkdeep replaced with workflow version

* WIP: improved messaging when an external model's response is received

* WIP: analyze tool swapped

* WIP: updated tests
* Extract only the content when building history
* Use "relevant_files" for workflow tools only

* WIP: fixed missing parameter in get_completion_next_steps_message

* Fixed tests
Request files consistently

* Fixed tests

* New testgen workflow tool
Updated docs

* Swap testgen workflow

* Fix CI test failures by excluding API-dependent tests

- Update GitHub Actions workflow to exclude simulation tests that require API keys
- Fix collaboration tests to properly mock workflow tool expert analysis calls
- Update test assertions to handle new workflow tool response format
- Ensure unit tests run without external API dependencies in CI
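
A self-contained sketch of the mocking approach described in the second bullet; the class and method names are stand-ins, not the repository's actual interface:

```python
from unittest.mock import patch

class FakeWorkflowTool:
    """Stand-in for a workflow tool; the real class/method names are assumptions here."""

    def _call_expert_analysis(self, prompt):
        raise RuntimeError("would call an external model API")

    def run(self, prompt):
        return self._call_expert_analysis(prompt)

def test_runs_without_external_api():
    tool = FakeWorkflowTool()
    # Patch the expert-analysis call so the test needs no API key.
    with patch.object(tool, "_call_expert_analysis", return_value={"status": "ok"}):
        assert tool.run("review this diff")["status"] == "ok"
```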

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* WIP - Update tests to match new tools

* Should help with https://github.com/BeehiveInnovations/zen-mcp-server/issues/97
Clear python cache when running script: https://github.com/BeehiveInnovations/zen-mcp-server/issues/96
Improved retry error logging
Cleanup
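
A minimal sketch of the cache-clearing step, assuming it simply deletes __pycache__ directories before the server script runs (the actual script may implement this differently):

```python
import pathlib
import shutil

# Remove stale bytecode caches so an updated tree doesn't run against old .pyc files.
for cache_dir in pathlib.Path(".").rglob("__pycache__"):
    shutil.rmtree(cache_dir, ignore_errors=True)
```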

* WIP - chat tool using new architecture and improved code sharing

* Removed todo

* Cleanup old name

* Tweak wording

* Tweak wording
Migrate old tests

* Support for Flash 2.0 and Flash Lite 2.0

* Support for Flash 2.0 and Flash Lite 2.0
Fixed test

* Improved consensus to use the workflow base class

* Allow images

* Replaced old consensus tool

* Cleanup tests

* Tests for prompt size

* New tool: docgen
Tests for prompt size
Fixes: https://github.com/BeehiveInnovations/zen-mcp-server/issues/107
Use available token size limits: https://github.com/BeehiveInnovations/zen-mcp-server/issues/105
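
A rough sketch of the token-limit idea from issue #105, assuming a per-model limit is known; the function and the 4-characters-per-token estimate are illustrative, not the project's implementation:

```python
# Hypothetical guard: keep an assembled prompt within the selected model's context window.
def fit_to_token_limit(prompt: str, max_tokens: int, chars_per_token: int = 4) -> str:
    max_chars = max_tokens * chars_per_token  # crude character-based token estimate
    return prompt if len(prompt) <= max_chars else prompt[:max_chars]

assert len(fit_to_token_limit("x" * 10_000, max_tokens=1_000)) == 4_000
```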

* Improved docgen prompt
Exclude TestGen from pytest collection
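
One common way to keep pytest from collecting a class whose name starts with "Test" is the __test__ = False marker; whether the commit used exactly this mechanism is an assumption:

```python
class TestGenTool:  # name begins with "Test", so pytest would otherwise try to collect it
    __test__ = False  # tells pytest this is a tool class, not a test case

    def generate_tests(self, target: str) -> str:
        return f"generated tests for {target}"
```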

* Updated errors

* Lint

* DocGen instructed not to fix bugs: surface them and stick to documentation

* WIP

* Stop Claude from being lazy and documenting only a small handful

* More style rules

---------

Co-authored-by: Claude <noreply@anthropic.com>
Author: Beehive Innovations
Date: 2025-06-21 23:21:19 -07:00 (committed by GitHub)
Parent: 0655590a51
Commit: c960bcb720
58 changed files with 5492 additions and 5558 deletions

@@ -71,10 +71,8 @@ class TestModelEnumeration:
         importlib.reload(config)
 
-        # Reload tools.base to ensure fresh state
-        import tools.base
-        importlib.reload(tools.base)
+        # Note: tools.base has been refactored to tools.shared.base_tool and tools.simple.base
+        # No longer need to reload as configuration is handled at provider level
 
     def test_no_models_when_no_providers_configured(self):
         """Test that no native models are included when no providers are configured."""
@@ -97,11 +95,6 @@ class TestModelEnumeration:
             len(non_openrouter_models) == 0
         ), f"No native models should be available without API keys, but found: {non_openrouter_models}"
 
-    @pytest.mark.skip(reason="Complex integration test - rely on simulator tests for provider testing")
-    def test_openrouter_models_with_api_key(self):
-        """Test that OpenRouter models are included when API key is configured."""
-        pass
-
     def test_openrouter_models_without_api_key(self):
         """Test that OpenRouter models are NOT included when API key is not configured."""
         self._setup_environment({})  # No OpenRouter key
@@ -115,11 +108,6 @@ class TestModelEnumeration:
         assert found_count == 0, "OpenRouter models should not be included without API key"
 
-    @pytest.mark.skip(reason="Integration test - rely on simulator tests for API testing")
-    def test_custom_models_with_custom_url(self):
-        """Test that custom models are included when CUSTOM_API_URL is configured."""
-        pass
-
     def test_custom_models_without_custom_url(self):
         """Test that custom models are NOT included when CUSTOM_API_URL is not configured."""
         self._setup_environment({})  # No custom URL
@@ -133,16 +121,6 @@ class TestModelEnumeration:
         assert found_count == 0, "Custom models should not be included without CUSTOM_API_URL"
 
-    @pytest.mark.skip(reason="Integration test - rely on simulator tests for API testing")
-    def test_all_providers_combined(self):
-        """Test that all models are included when all providers are configured."""
-        pass
-
-    @pytest.mark.skip(reason="Integration test - rely on simulator tests for API testing")
-    def test_mixed_provider_combinations(self):
-        """Test various mixed provider configurations."""
-        pass
-
     def test_no_duplicates_with_overlapping_providers(self):
         """Test that models aren't duplicated when multiple providers offer the same model."""
         self._setup_environment(
@@ -164,11 +142,6 @@ class TestModelEnumeration:
         duplicates = {m: count for m, count in model_counts.items() if count > 1}
         assert len(duplicates) == 0, f"Found duplicate models: {duplicates}"
 
-    @pytest.mark.skip(reason="Integration test - rely on simulator tests for API testing")
-    def test_schema_enum_matches_get_available_models(self):
-        """Test that the schema enum matches what _get_available_models returns."""
-        pass
-
     @pytest.mark.parametrize(
         "model_name,should_exist",
         [