Add DocGen tool with comprehensive documentation generation capabilities (#109)
* WIP: new workflow architecture
* WIP: further improvements and cleanup
* WIP: cleanup and docs, replace old tool with new
* WIP: new planner implementation using workflow
* WIP: precommit tool working as a workflow instead of a basic tool. Support for passing False to `use_assistant_model` to skip external models completely and use Claude only
* WIP: precommit workflow version swapped with old
* WIP: codereview
* WIP: replaced codereview
* WIP: replaced refactor
* WIP: workflow for thinkdeep
* WIP: ensure files get embedded correctly
* WIP: thinkdeep replaced with workflow version
* WIP: improved messaging when an external model's response is received
* WIP: analyze tool swapped
* WIP: updated tests
* Extract only the content when building history
* Use "relevant_files" for workflow tools only
* WIP: fixed `get_completion_next_steps_message` missing param
* Fixed tests. Request for files consistently
* New testgen workflow tool. Updated docs
* Swap testgen workflow
* Fix CI test failures by excluding API-dependent tests
  - Update GitHub Actions workflow to exclude simulation tests that require API keys
  - Fix collaboration tests to properly mock workflow tool expert analysis calls
  - Update test assertions to handle the new workflow tool response format
  - Ensure unit tests run without external API dependencies in CI
* WIP: update tests to match new tools
* Should help with https://github.com/BeehiveInnovations/zen-mcp-server/issues/97. Clear Python cache when running the script: https://github.com/BeehiveInnovations/zen-mcp-server/issues/96. Improved retry error logging. Cleanup
* WIP: chat tool using new architecture and improved code sharing
* Removed todo
* Cleanup old name
* Tweak wordings. Migrate old tests
* Support for Flash 2.0 and Flash Lite 2.0. Fixed test
* Improved consensus to use the workflow base class
* Allow images
* Replaced old consensus tool
* Cleanup tests
* Tests for prompt size
* New tool: docgen. Fixes https://github.com/BeehiveInnovations/zen-mcp-server/issues/107. Use available token size limits: https://github.com/BeehiveInnovations/zen-mcp-server/issues/105
* Improved docgen prompt. Exclude TestGen from pytest inclusion
* Updated errors
* Lint
* DocGen instructed not to fix bugs, surface them and stick to d
* WIP
* Stop Claude from being lazy and only documenting a small handful
* More style rules

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
Committed via GitHub. Parent 0655590a51, commit c960bcb720. Changed file: CLAUDE.md (62 lines).
@@ -20,9 +20,18 @@ This script automatically runs:
 - Ruff linting with auto-fix
 - Black code formatting
 - Import sorting with isort
-- Complete unit test suite
+- Complete unit test suite (excluding integration tests)
 - Verification that all checks pass 100%
+
+**Run Integration Tests (requires API keys):**
+```bash
+# Run integration tests that make real API calls
+./run_integration_tests.sh
+
+# Run integration tests + simulator tests
+./run_integration_tests.sh --with-simulator
+```

 ### Server Management

 #### Setup/Update the Server
@@ -160,8 +169,8 @@ Available simulator tests include:

 #### Run Unit Tests Only
 ```bash
-# Run all unit tests
-python -m pytest tests/ -v
+# Run all unit tests (excluding integration tests that require API keys)
+python -m pytest tests/ -v -m "not integration"

 # Run specific test file
 python -m pytest tests/test_refactor.py -v
@@ -170,26 +179,59 @@ python -m pytest tests/test_refactor.py -v
 python -m pytest tests/test_refactor.py::TestRefactorTool::test_format_response -v

 # Run tests with coverage
-python -m pytest tests/ --cov=. --cov-report=html
+python -m pytest tests/ --cov=. --cov-report=html -m "not integration"
 ```

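The `-m "not integration"` filters above rely on an `integration` marker being registered with pytest. As a hedged sketch (the config filename and marker description here are illustrative, not copied from the repo), that registration usually looks like this:

```shell
# Illustrative sketch only: write an example pytest config that registers
# the "integration" marker selected by the -m "not integration" filter above.
cat > /tmp/pytest_marker_example.ini <<'EOF'
[pytest]
markers =
    integration: tests that make real API calls to local models
EOF

# Show the registered marker line
grep "integration:" /tmp/pytest_marker_example.ini
```

Without such a registration, pytest warns about unknown marks (and errors under `--strict-markers`).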
+#### Run Integration Tests (Uses Free Local Models)
+
+**Setup Requirements:**
+```bash
+# 1. Install Ollama (if not already installed)
+# Visit https://ollama.ai or use brew install ollama
+
+# 2. Start Ollama service
+ollama serve
+
+# 3. Pull a model (e.g., llama3.2)
+ollama pull llama3.2
+
+# 4. Set environment variable for custom provider
+export CUSTOM_API_URL="http://localhost:11434"
+```
+
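Before running the suite, it can help to confirm the endpoint actually answers. A minimal sketch, assuming a default local Ollama install (Ollama's `/api/tags` route lists pulled models):

```shell
# Hedged sketch: verify the Ollama endpoint is up before running the
# integration suite. Falls back to the local URL used above if unset.
CUSTOM_API_URL="${CUSTOM_API_URL:-http://localhost:11434}"
if curl -sf "${CUSTOM_API_URL}/api/tags" > /dev/null 2>&1; then
  echo "Ollama reachable at ${CUSTOM_API_URL}"
else
  echo "Ollama not reachable at ${CUSTOM_API_URL}; start it with 'ollama serve'"
fi
```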
+**Run Integration Tests:**
+```bash
+# Run integration tests that make real API calls to local models
+python -m pytest tests/ -v -m "integration"
+
+# Run specific integration test
+python -m pytest tests/test_prompt_regression.py::TestPromptIntegration::test_chat_normal_prompt -v
+
+# Run all tests (unit + integration)
+python -m pytest tests/ -v
+```
+
+**Note**: Integration tests use the local-llama model via Ollama, which is completely FREE to run unlimited times. Requires the `CUSTOM_API_URL` environment variable to be set to your local Ollama endpoint. They can be run safely in CI/CD but are excluded from code quality checks to keep those checks fast.
+
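For orientation, a test selected by `-m "integration"` is simply one carrying that marker. A hypothetical example, not taken from the repository:

```shell
# Hypothetical example of a test that -m "integration" would select;
# illustrative only, not a file from the repo.
cat > /tmp/test_integration_example.py <<'EOF'
import pytest

@pytest.mark.integration
def test_local_model_responds():
    # a real test would call the local Ollama endpoint here
    assert True
EOF

grep "pytest.mark.integration" /tmp/test_integration_example.py
```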
 ### Development Workflow

 #### Before Making Changes
-1. Ensure virtual environment is activated: `source venv/bin/activate`
+1. Ensure virtual environment is activated: `source .zen_venv/bin/activate`
 2. Run quality checks: `./code_quality_checks.sh`
 3. Check logs to ensure server is healthy: `tail -n 50 logs/mcp_server.log`

 #### After Making Changes
 1. Run quality checks again: `./code_quality_checks.sh`
-2. Run relevant simulator tests: `python communication_simulator_test.py --individual <test_name>`
-3. Check logs for any issues: `tail -n 100 logs/mcp_server.log`
-4. Restart Claude session to use updated code
+2. Run integration tests locally: `./run_integration_tests.sh`
+3. Run relevant simulator tests: `python communication_simulator_test.py --individual <test_name>`
+4. Check logs for any issues: `tail -n 100 logs/mcp_server.log`
+5. Restart Claude session to use updated code

 #### Before Committing/PR
 1. Final quality check: `./code_quality_checks.sh`
-2. Run full simulator test suite: `python communication_simulator_test.py`
-3. Verify all tests pass 100%
+2. Run integration tests: `./run_integration_tests.sh`
+3. Run full simulator test suite: `./run_integration_tests.sh --with-simulator`
+4. Verify all tests pass 100%

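The pre-commit checklist above can be sketched as a single fail-fast chain. Steps are only echoed here so the sketch runs anywhere; substitute `${step}` for the `echo` to execute the real scripts:

```shell
# Hedged sketch: the pre-commit checklist as one fail-fast sequence.
# Commands are echoed, not executed, so this runs on any machine.
set -e
for step in "./code_quality_checks.sh" \
            "./run_integration_tests.sh" \
            "./run_integration_tests.sh --with-simulator"; do
  echo "step: ${step}"
done
echo "checklist complete"
```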
 ### Common Troubleshooting