Add DocGen tool with comprehensive documentation generation capabilities (#109)

* WIP: new workflow architecture

* WIP: further improvements and cleanup

* WIP: cleanup and docs, replace old tool with new

* WIP: new planner implementation using workflow

* WIP: precommit tool working as a workflow instead of a basic tool
Support for passing False to use_assistant_model to skip external models completely and use Claude only
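
A minimal sketch of what that option looks like from the caller's side; the surrounding field names are assumptions about the workflow request shape, not the confirmed schema:

# Hypothetical precommit workflow request; field names other than
# use_assistant_model are assumptions. Setting it to False skips the
# external expert model entirely and relies on Claude alone.
request = {
    "step": "Review staged changes for regressions",
    "step_number": 1,
    "total_steps": 1,
    "next_step_required": False,
    "use_assistant_model": False,  # skip external models, use Claude only
}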

* WIP: precommit workflow version swapped with old

* WIP: codereview

* WIP: replaced codereview

* WIP: replaced refactor

* WIP: workflow for thinkdeep

* WIP: ensure files get embedded correctly

* WIP: thinkdeep replaced with workflow version

* WIP: improved messaging when an external model's response is received

* WIP: analyze tool swapped

* WIP: updated tests
* Extract only the content when building history
* Use "relevant_files" for workflow tools only

* WIP: fixed missing param in get_completion_next_steps_message

* Fixed tests
Request files consistently

* Fixed tests

* New testgen workflow tool
Updated docs

* Swap testgen workflow

* Fix CI test failures by excluding API-dependent tests

- Update GitHub Actions workflow to exclude simulation tests that require API keys
- Fix collaboration tests to properly mock workflow tool expert analysis calls
- Update test assertions to handle new workflow tool response format
- Ensure unit tests run without external API dependencies in CI (see the marker sketch below)
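
The split relies on pytest's standard marker mechanism. The "integration" marker name matches the one used by run_integration_tests.sh further down; the exact CI deselection command is an assumption:

import pytest

# Tests that hit live APIs carry this marker; CI deselects them,
# presumably with something like: python -m pytest tests/ -m "not integration"
@pytest.mark.integration
def test_real_gemini_call():
    ...  # exercises a live provider; run locally via ./run_integration_tests.sh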

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* WIP - Update tests to match new tools

* Should help with https://github.com/BeehiveInnovations/zen-mcp-server/issues/97
Clear python cache when running script: https://github.com/BeehiveInnovations/zen-mcp-server/issues/96
Improved retry error logging
Cleanup
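
Sketched in Python for illustration; the actual cache clearing happens in the run script, so the approach and paths here are assumptions:

# Illustrative __pycache__ cleanup, equivalent in spirit to what the
# run script does; the real implementation may differ.
import pathlib
import shutil

for pycache in pathlib.Path(".").rglob("__pycache__"):
    shutil.rmtree(pycache, ignore_errors=True)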

* WIP - chat tool using new architecture and improved code sharing

* Removed todo

* Cleanup old name

* Tweak wordings

* Tweak wordings
Migrate old tests

* Support for Flash 2.0 and Flash Lite 2.0
Fixed test

* Improved consensus to use the workflow base class

* Allow images

* Replaced old consensus tool

* Cleanup tests

* New tool: docgen
Tests for prompt size
Fixes: https://github.com/BeehiveInnovations/zen-mcp-server/issues/107
Use available token size limits: https://github.com/BeehiveInnovations/zen-mcp-server/issues/105
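
The issue #105 fix amounts to sizing prompts against the active model's limit rather than a fixed cap; a crude sketch with invented names:

# Hypothetical guard; the names and the 4-chars-per-token estimate are
# illustrative. The point: budget against the selected model's window.
def check_prompt_size(prompt: str, model_token_limit: int) -> None:
    estimated_tokens = len(prompt) // 4  # rough chars-per-token heuristic
    if estimated_tokens > model_token_limit:
        raise ValueError(
            f"Prompt of ~{estimated_tokens} tokens exceeds model limit "
            f"of {model_token_limit}"
        )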

* Improved docgen prompt
Exclude TestGen from pytest collection
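
pytest collects any class whose name starts with "Test", so a tool class named like TestGen gets picked up as a test suite; the standard opt-out is the __test__ attribute (the class name here is a guess):

# Keep pytest from collecting the tool class as a test suite.
# "TestGenTool" is a guessed name; __test__ = False is pytest's
# standard opt-out for Test-prefixed classes.
class TestGenTool:
    __test__ = False  # not a test class, despite the Test prefix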

* Updated errors

* Lint

* DocGen instructed not to fix bugs but to surface them and stick to documentation

* WIP

* Stop Claude from being lazy and documenting only a small handful of functions

* More style rules

---------

Co-authored-by: Claude <noreply@anthropic.com>

Author: Beehive Innovations
Date: 2025-06-21 23:21:19 -07:00
Committed by: GitHub
Parent: 0655590a51
Commit: c960bcb720
58 changed files with 5492 additions and 5558 deletions

run_integration_tests.sh (new executable file, +90 lines)

@@ -0,0 +1,90 @@
#!/bin/bash
# Zen MCP Server - Run Integration Tests
# This script runs integration tests that require API keys
# Run this locally on your Mac to ensure everything works end-to-end

set -e  # Exit on any error

echo "🧪 Running Integration Tests for Zen MCP Server"
echo "=============================================="
echo "These tests use real API calls with your configured keys"
echo ""

# Activate virtual environment
if [[ -f ".zen_venv/bin/activate" ]]; then
    source .zen_venv/bin/activate
    echo "✅ Using virtual environment"
else
    echo "❌ No virtual environment found!"
    echo "Please run: ./run-server.sh first"
    exit 1
fi

# Check for .env file
if [[ ! -f ".env" ]]; then
    echo "⚠️ Warning: No .env file found. Integration tests may fail without API keys."
    echo ""
fi

echo "🔑 Checking API key availability:"
echo "---------------------------------"

# Check which API keys are available, either exported or set in .env
if [[ -n "$GEMINI_API_KEY" ]] || grep -q "GEMINI_API_KEY=" .env 2>/dev/null; then
    echo "✅ GEMINI_API_KEY configured"
else
    echo "❌ GEMINI_API_KEY not found"
fi

if [[ -n "$OPENAI_API_KEY" ]] || grep -q "OPENAI_API_KEY=" .env 2>/dev/null; then
    echo "✅ OPENAI_API_KEY configured"
else
    echo "❌ OPENAI_API_KEY not found"
fi

if [[ -n "$XAI_API_KEY" ]] || grep -q "XAI_API_KEY=" .env 2>/dev/null; then
    echo "✅ XAI_API_KEY configured"
else
    echo "❌ XAI_API_KEY not found"
fi

if [[ -n "$OPENROUTER_API_KEY" ]] || grep -q "OPENROUTER_API_KEY=" .env 2>/dev/null; then
    echo "✅ OPENROUTER_API_KEY configured"
else
    echo "❌ OPENROUTER_API_KEY not found"
fi

if [[ -n "$CUSTOM_API_URL" ]] || grep -q "CUSTOM_API_URL=" .env 2>/dev/null; then
    echo "✅ CUSTOM_API_URL configured (local models)"
else
    echo "❌ CUSTOM_API_URL not found"
fi

echo ""

# Run integration tests
echo "🏃 Running integration tests..."
echo "------------------------------"

# Run only integration tests (marked with @pytest.mark.integration)
python -m pytest tests/ -v -m "integration" --tb=short

echo ""
echo "✅ Integration tests completed!"
echo ""

# Also run simulator tests if requested
if [[ "$1" == "--with-simulator" ]]; then
    echo "🤖 Running simulator tests..."
    echo "----------------------------"
    python communication_simulator_test.py --verbose
    echo ""
    echo "✅ Simulator tests completed!"
fi

echo "💡 Tips:"
echo "- Run './run_integration_tests.sh' for integration tests only"
echo "- Run './run_integration_tests.sh --with-simulator' to also run simulator tests"
echo "- Run './code_quality_checks.sh' for unit tests and linting"
echo "- Check logs in logs/mcp_server.log if tests fail"