Fix for: https://github.com/BeehiveInnovations/zen-mcp-server/issues/102
- Removed centralized MODEL_CAPABILITIES_DESC from config.py
- Added model descriptions to individual provider SUPPORTED_MODELS
- Updated _get_available_models() to use ModelProviderRegistry for API key filtering
- Added comprehensive test suite validating bug reproduction and fix
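A rough sketch of the shape of this change; `providers_with_api_keys()` is a hypothetical helper, not the registry's actual API:

```python
# Hypothetical sketch: descriptions live on each provider's SUPPORTED_MODELS,
# and only models from providers with a configured API key are listed.
def _get_available_models(registry) -> dict[str, str]:
    descriptions: dict[str, str] = {}
    for provider in registry.providers_with_api_keys():  # assumed helper
        for name, capabilities in provider.SUPPORTED_MODELS.items():
            descriptions[name] = capabilities.get("description", "")
    return descriptions
```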
* WIP: new workflow architecture
* WIP: further improvements and cleanup
* WIP: cleanup and docs, replace old tool with new
* WIP: new planner implementation using workflow
* WIP: precommit tool working as a workflow instead of a basic tool
Support for passing `use_assistant_model=False` to skip external models entirely and use Claude only (see the sketch below)
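A minimal sketch of such a call; every field other than `use_assistant_model` is illustrative:

```python
# Illustrative workflow-tool arguments; field names other than
# use_assistant_model are placeholders, not the tool's exact schema.
arguments = {
    "step": "Investigate the intermittent crash in the parser",
    "step_number": 1,
    "total_steps": 1,
    "next_step_required": False,
    "use_assistant_model": False,  # skip external models; Claude only
}
```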
* WIP: precommit workflow version swapped with old
* WIP: codereview
* WIP: replaced codereview
* WIP: replaced refactor
* WIP: workflow for thinkdeep
* WIP: ensure files get embedded correctly
* WIP: thinkdeep replaced with workflow version
* WIP: improved messaging when an external model's response is received
* WIP: analyze tool swapped
* WIP: updated tests
* Extract only the content when building history
* Use "relevant_files" for workflow tools only
* WIP: fixed missing parameter in get_completion_next_steps_message
* Fixed tests
Request files consistently
* Fixed tests
* New testgen workflow tool
Updated docs
* Swap testgen workflow
* Fix CI test failures by excluding API-dependent tests
- Update GitHub Actions workflow to exclude simulation tests that require API keys
- Fix collaboration tests to properly mock workflow tool expert analysis calls
- Update test assertions to handle new workflow tool response format
- Ensure unit tests run without external API dependencies in CI
* WIP - Update tests to match new tools
---------
* Migration from Docker to a standalone server
Migration handling
Fixed tests
Use simpler in-memory storage (see the sketch after this list)
Support for concurrent logging to disk
Simplified direct connections to localhost
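A rough sketch of the in-memory storage idea, assuming a lock-guarded dict rather than the server's actual implementation:

```python
import threading

class InMemoryConversationStore:
    """Illustrative stand-in for the previous Redis-backed storage."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._threads: dict[str, list[dict]] = {}

    def add_turn(self, thread_id: str, turn: dict) -> None:
        with self._lock:  # safe under concurrent tool calls
            self._threads.setdefault(thread_id, []).append(turn)

    def get_turns(self, thread_id: str) -> list[dict]:
        with self._lock:
            return list(self._threads.get(thread_id, []))
```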
* Migration from Docker / Redis to a standalone script
Updated tests
Updated run script
Fixed requirements
Use dotenv for environment configuration (see the sketch after this list)
Ask once whether the user would like to install the MCP server in Claude Desktop
Updated docs
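The dotenv step is the standard python-dotenv pattern; the key name below is one of the project's documented variables, the rest is a sketch:

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # pull keys from .env in the working directory
gemini_key = os.getenv("GEMINI_API_KEY")  # None if unset, rather than crashing
```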
* More cleanup; removed remaining references to Docker
* Cleanup
* Comments
* Fixed tests
* Fix GitHub Actions workflow for standalone Python architecture
- Install requirements-dev.txt for pytest and testing dependencies
- Remove Docker setup from simulation tests (now standalone)
- Simplify linting job to use requirements-dev.txt
- Update simulation tests to run directly without Docker
Fixes unit test failures in CI due to missing pytest dependency.
* Remove simulation tests from GitHub Actions
- Removed simulation-tests job that makes real API calls
- Keep only unit tests (mocked, no API costs) and linting
- Simulation tests should be run manually with real API keys
- Reduces CI costs and complexity
GitHub Actions now only runs:
- Unit tests (569 tests, all mocked)
- Code quality checks (ruff, black)
* Fixed tests
---------
* WIP
Refactored model-name resolution; it should happen once at the MCP call boundary
Pass around model context instead
The consensus tool gathers a consensus from multiple models, optionally assigning a model a 'for' or 'against' stance to surface nuanced responses.
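A hypothetical request showing the stance idea; the field names are illustrative, not the tool's actual schema:

```python
consensus_request = {
    "prompt": "Should we migrate the storage layer to SQLite?",
    "models": [
        {"model": "o3", "stance": "for"},                  # argues in favour
        {"model": "gemini-2.5-pro", "stance": "against"},  # argues against
        {"model": "flash"},                                # no stance: neutral
    ],
}
```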
* Deduplicated model resolution; model_context is now available before reaching deeper parts of the code
Improved abstraction when building conversations
Throw programmer errors early
* Guardrails
Support for the `model:option` format at the MCP boundary so future tools can accept additional options if needed, rather than handling this only for consensus
Model names now support an optional ':option' suffix for future use (parsing sketched below)
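A sketch of the boundary parsing this implies, splitting at the first colon (the real implementation may differ):

```python
def parse_model_option(value: str) -> tuple[str, str | None]:
    """'o3:for' -> ('o3', 'for'); 'flash' -> ('flash', None)."""
    model, sep, option = value.partition(":")
    return model.strip(), (option.strip() or None) if sep else None
```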
* Simplified async flow
* Improved the request model to support natural language
Simplified async flow
* Fix consensus tool async/sync patterns to match codebase standards
CRITICAL FIXES:
- Converted _get_consensus_responses from async to sync (matches other tools)
- Converted store_conversation_turn from async to sync (add_turn is synchronous)
- Removed unnecessary asyncio imports and sleep calls
- Fixed ClosedResourceError in MCP protocol during long consensus operations
PATTERN ALIGNMENT:
- Consensus tool now follows same sync patterns as all other tools
- Only execute() and prepare_prompt() are async (base class requirement)
- All internal operations are synchronous like analyze, chat, debug, etc.
TESTING:
- MCP simulation test now passes: consensus_stance ✅
- Two-model consensus works correctly in ~35 seconds
- Unknown stance handling defaults to neutral with warnings
- All 9 unit tests pass (100% success rate)
The consensus tool async patterns were anomalous in the codebase.
This fix aligns it with the established synchronous patterns used
by all other tools while maintaining full functionality.
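The resulting shape, simplified; the method names follow the commit message, the bodies are placeholders:

```python
class ConsensusTool:
    async def execute(self, arguments: dict) -> str:
        # Only the entry point is async (base class requirement).
        responses = self._get_consensus_responses(arguments)  # sync
        self.store_conversation_turn(responses)               # sync
        return self._format(responses)

    def _get_consensus_responses(self, arguments: dict) -> list:
        ...  # blocking provider calls, one per model

    def store_conversation_turn(self, responses: list) -> None:
        ...  # add_turn() is synchronous, so no await is needed

    def _format(self, responses: list) -> str:
        ...
```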
* Fixed call order and added new test
* Cleanup dead comments
Docs for the new tool
Improved tests
---------
New `listmodels` tool to list all available models
Integration test covering all the different combinations of API keys
Tweaks to the codereview prompt for better-quality input from Claude
Fixed missing 'low' severity in codereview
New test confirming that history build-up and the system prompt do not affect prompt-size checks
Also check for large prompts in focus_on
Fixed .env.example: CUSTOM_API was incorrectly left uncommented, causing the run-server script to think at least one API key existed
Supports decomposing large components and files, finding code smells, and spotting modernization and code-organization opportunities. Fix those mega-classes today!
Line numbers added to embedded code so the model can reference exact locations back to Claude (see the sketch below)
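Conceptually the numbering looks like this sketch (the server's exact formatting may differ):

```python
def number_lines(source: str) -> str:
    """Prefix each line so the model can cite exact locations."""
    return "\n".join(
        f"{i:4d}│ {line}"
        for i, line in enumerate(source.splitlines(), start=1)
    )

# number_lines("def f():\n    return 1") ->
#    1│ def f():
#    2│     return 1
```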
- Added o3-pro model configuration to custom_models.json with 200K context
- Updated OpenAI provider to support o3-pro with fixed temperature constraint
- Extended simulator tests to include o3-pro validation scenarios
Further fixes to tests
Pass the O3 simulation test with a notice when API keys are not set
Updated docs on testing, simulation tests, and contributing
Support for OpenAI o4-mini and o4-mini-high
Encourage Claude to pick the best model for the job automatically in auto mode
Lots of new tests ensuring automatic model picking works reliably based on user preference, and when a matching model is not found or is ambiguous
Improved error reporting when a bogus model is requested that is not configured or available (see the sketch below)
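A minimal sketch of that failure mode, assuming a simple lookup against the configured models:

```python
def resolve_model(name: str, available: set[str]) -> str:
    """Fail fast with the configured alternatives, not deep inside a call."""
    if name in available:
        return name
    raise ValueError(
        f"Model '{name}' is not configured or available. "
        f"Known models: {', '.join(sorted(available))}"
    )
```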
* Support for custom URLs and custom models, including locally hosted models served via Ollama
* Support for native + OpenRouter + local models (i.e. dozens of models) means you can delegate sub-tasks to particular models, or hand boring work such as localization off to local models
* Several tests added
* precommit now also includes untracked (new) files
* Log file auto-rollover (see the sketch below)
* Improved logging
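The rollover follows the standard-library pattern below; the path and sizes are illustrative, not the server's actual settings:

```python
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(
    "mcp_server.log",
    maxBytes=10 * 1024 * 1024,  # roll over at ~10 MB
    backupCount=5,              # keep five rotated files
)
handler.setFormatter(
    logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
)
logging.getLogger().addHandler(handler)
```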