* WIP: new workflow architecture
* WIP: further improvements and cleanup
* WIP: cleanup and docs, replace old tool with new
* WIP: new planner implementation using workflow
* WIP: precommit tool working as a workflow instead of a basic tool
Support for passing `use_assistant_model=False` to skip external models completely and use Claude only
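As a rough illustration of the flag (a sketch only; every field here besides `use_assistant_model` is an assumption about the workflow schema):

```python
# Hypothetical precommit workflow call: use_assistant_model=False keeps
# the entire run inside Claude, with no external-model expert analysis.
arguments = {
    "step": "Review the staged diff for regressions",  # illustrative
    "step_number": 1,
    "total_steps": 1,
    "next_step_required": False,
    "use_assistant_model": False,  # skip external models entirely
}
```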
* WIP: precommit workflow version swapped with old
* WIP: codereview
* WIP: replaced codereview
* WIP: replaced refactor
* WIP: workflow for thinkdeep
* WIP: ensure files get embedded correctly
* WIP: thinkdeep replaced with workflow version
* WIP: improved messaging when an external model's response is received
* WIP: analyze tool swapped
* WIP: updated tests
* Extract only the content when building history
* Use "relevant_files" for workflow tools only
* WIP: fixed missing parameter in get_completion_next_steps_message
* Fixed tests
Request files consistently
* New testgen workflow tool
Updated docs
* Swap testgen workflow
* Fix CI test failures by excluding API-dependent tests
- Update GitHub Actions workflow to exclude simulation tests that require API keys
- Fix collaboration tests to properly mock workflow tool expert analysis calls
- Update test assertions to handle new workflow tool response format
- Ensure unit tests run without external API dependencies in CI
* WIP - Update tests to match new tools
* Should help with https://github.com/BeehiveInnovations/zen-mcp-server/issues/97
Clear Python cache when running the script: https://github.com/BeehiveInnovations/zen-mcp-server/issues/96
Improved retry error logging
Cleanup
* WIP - chat tool using new architecture and improved code sharing
* Removed todo
* Cleanup old name
* Tweak wording
Migrate old tests
* Support for Flash 2.0 and Flash Lite 2.0
Fixed test
* Improved consensus to use the workflow base class
* Allow images
* Replaced old consensus tool
* Cleanup tests
* Tests for prompt size
* New tool: docgen
Fixes: https://github.com/BeehiveInnovations/zen-mcp-server/issues/107
Use available token size limits: https://github.com/BeehiveInnovations/zen-mcp-server/issues/105
* Improved docgen prompt
Exclude TestGen from pytest collection
* Updated errors
* Lint
* DocGen instructed not to fix bugs but to surface them, and to stick to documentation
* WIP
* Stop Claude from being lazy and documenting only a small handful
* More style rules
Fix for: https://github.com/BeehiveInnovations/zen-mcp-server/issues/102
- Removed centralized MODEL_CAPABILITIES_DESC from config.py
- Added model descriptions to individual provider SUPPORTED_MODELS
- Updated _get_available_models() to use ModelProviderRegistry for API key filtering
- Added comprehensive test suite validating bug reproduction and fix
* fix: respect OPENROUTER_ALLOWED_MODELS in listmodels tool
- Modified listmodels tool to use provider's list_models() method with respect_restrictions=True
- This ensures only models allowed by OPENROUTER_ALLOWED_MODELS are shown
- Added note indicating when model restrictions are active
- Fixed total model count to also respect restrictions
Previously, the tool was directly accessing the OpenRouter registry and showing all ~200 models regardless of the OPENROUTER_ALLOWED_MODELS setting.
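A minimal sketch of the restriction logic described above, assuming a comma-separated `OPENROUTER_ALLOWED_MODELS` value (the helper name is hypothetical):

```python
import os

def allowed_openrouter_models(all_models: list[str]) -> list[str]:
    """Filter the full OpenRouter catalog down to the allowed set.

    With no restriction set, every model is listed; otherwise only
    the names in OPENROUTER_ALLOWED_MODELS survive.
    """
    raw = os.getenv("OPENROUTER_ALLOWED_MODELS", "").strip()
    if not raw:
        return all_models  # no restriction: show everything
    allowed = {name.strip().lower() for name in raw.split(",") if name.strip()}
    return [m for m in all_models if m.lower() in allowed]
```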
* test: add tests for listmodels OpenRouter restrictions
- Test that listmodels respects OPENROUTER_ALLOWED_MODELS setting
- Test shows only allowed models when restrictions are set
- Test shows all models when no restrictions are set
- Verify proper use of respect_restrictions parameter
* Corrected test
* test: fix test expectations for listmodels
- Update tests to parse JSON response format
- Fix model counting logic to handle provider grouping
- Adjust expectations based on actual tool behavior (max 5 models per provider)
- Tests now properly validate both restricted and unrestricted scenarios
* style: fix code formatting issues
- Applied ruff, black, and isort formatting
- Fixed import order and removed trailing whitespace
- All code quality checks now pass
* fix: improve exception handling based on code review feedback
- Added proper logging for exceptions instead of silent pass
- Import logging module and create logger instance
- Log warnings when error checking OpenRouter restrictions
- Log warnings when error getting total available models
- Maintains backward compatibility while improving debuggability
Co-authored-by: Patryk Ciechanski <patryk.ciechanski@inetum.com>
* Migration from docker to standalone server
Migration handling
Fixed tests
Use simpler in-memory storage
Support for concurrent logging to disk
Simplified direct connections to localhost
* Migration from docker / redis to standalone script
Updated tests
Updated run script
Fixed requirements
Use dotenv
Ask once whether the user would like to install the MCP in Claude Desktop
Updated docs
* More cleanup and references to docker removed
* Cleanup
* Comments
* Fixed tests
* Fix GitHub Actions workflow for standalone Python architecture
- Install requirements-dev.txt for pytest and testing dependencies
- Remove Docker setup from simulation tests (now standalone)
- Simplify linting job to use requirements-dev.txt
- Update simulation tests to run directly without Docker
Fixes unit test failures in CI due to missing pytest dependency.
* Remove simulation tests from GitHub Actions
- Removed simulation-tests job that makes real API calls
- Keep only unit tests (mocked, no API costs) and linting
- Simulation tests should be run manually with real API keys
- Reduces CI costs and complexity
GitHub Actions now only runs:
- Unit tests (569 tests, all mocked)
- Code quality checks (ruff, black)
* Fixed tests
* WIP
Refactor model-name resolution; it should happen once at the MCP call boundary
Pass around model context instead
The consensus tool gathers a consensus from multiple models, optionally assigning each a 'for' or 'against' stance to surface nuanced responses.
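Something like the following request shape captures the idea (field names are an approximation, not the tool's exact schema):

```python
# Two models argue opposite sides of the same question, so the
# synthesis can weigh the strongest case for each position.
consensus_request = {
    "prompt": "Should we migrate the storage layer to SQLite?",
    "models": [
        {"model": "o3", "stance": "for"},
        {"model": "flash", "stance": "against"},
    ],
}
```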
* Deduplication of model resolution, model_context should be available before reaching deeper parts of the code
Improved abstraction when building conversations
Throw programmer errors early
* Guardrails
Support for `model:option` format at MCP boundary so future tools can use additional options if needed instead of handling this only for consensus
Model name now supports an optional ":option" for future use
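A minimal sketch of that boundary parsing (the function name is hypothetical, and real model names containing ':' would need extra care):

```python
def parse_model_option(value: str) -> tuple[str, str | None]:
    """Split 'model:option' into its parts; the option may be absent."""
    model, sep, option = value.partition(":")
    return model.strip(), (option.strip() or None) if sep else None

# parse_model_option("o3:for") -> ("o3", "for")
# parse_model_option("flash")  -> ("flash", None)
```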
* Simplified async flow
* Improved the request model to support natural language
* Fix consensus tool async/sync patterns to match codebase standards
CRITICAL FIXES:
- Converted _get_consensus_responses from async to sync (matches other tools)
- Converted store_conversation_turn from async to sync (add_turn is synchronous)
- Removed unnecessary asyncio imports and sleep calls
- Fixed ClosedResourceError in MCP protocol during long consensus operations
PATTERN ALIGNMENT:
- Consensus tool now follows same sync patterns as all other tools
- Only execute() and prepare_prompt() are async (base class requirement)
- All internal operations are synchronous like analyze, chat, debug, etc.
TESTING:
- MCP simulation test now passes: consensus_stance ✅
- Two-model consensus works correctly in ~35 seconds
- Unknown stance handling defaults to neutral with warnings
- All 9 unit tests pass (100% success rate)
The consensus tool async patterns were anomalous in the codebase.
This fix aligns it with the established synchronous patterns used
by all other tools while maintaining full functionality.
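The resulting shape, as a skeleton (`_get_consensus_responses` and `store_conversation_turn` are named in the commit; the bodies here are placeholders):

```python
class ConsensusTool:
    """Only execute() is async (a base-class requirement); internals are sync."""

    async def execute(self, arguments: dict) -> str:
        responses = self._get_consensus_responses(arguments)  # sync, like other tools
        self.store_conversation_turn(responses)  # sync: add_turn is synchronous
        return "\n\n".join(responses)

    def _get_consensus_responses(self, arguments: dict) -> list[str]:
        # Each configured model is consulted synchronously, in order.
        return [f"[{m['model']}] ..." for m in arguments.get("models", [])]

    def store_conversation_turn(self, responses: list[str]) -> None:
        # Persists the turn via conversation memory; add_turn itself is sync.
        pass
```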
* Fixed call order and added new test
* Cleanup dead comments
Docs for the new tool
Improved tests
Adds two comprehensive tests to prevent future regression of the parameter
order bug in `restriction_service.is_allowed()` calls:
1. `test_gemini_parameter_order_regression_protection` - Tests edge case
where only alias is allowed, ensuring correct parameter order
2. `test_gemini_parameter_order_edge_case_full_name_only` - Tests reverse
scenario where only full model name is allowed
These tests specifically catch the subtle bug where parameters were
incorrectly passed as (provider, user_input, resolved_name) instead of
(provider, resolved_name, user_input). The bug was masked by OR logic
in most cases but could cause issues in edge scenarios.
All 498 tests pass, including the new regression protection tests.
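A stripped-down version of what such a test can pin down (the call site is inlined and the names are illustrative; only `is_allowed` and its argument order come from the source):

```python
from unittest.mock import MagicMock

def test_is_allowed_argument_order():
    """Fail if the last two is_allowed() arguments are ever swapped again."""
    service = MagicMock()
    service.is_allowed.return_value = True

    provider, user_input, resolved = "google", "flash", "gemini-2.0-flash"
    service.is_allowed(provider, resolved, user_input)  # code under test, inlined

    # (provider, resolved_name, user_input), not (provider, user_input, resolved_name)
    service.is_allowed.assert_called_once_with(provider, resolved, user_input)
```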
New tool to list all models `listmodels`
Integration test for all the different combinations of API keys
Tweaks to codereview prompt for a better quality input from Claude
Fixed missing 'low' severity in codereview
Following the established testing patterns from other tool tests:
- Removed mocking of providers and capabilities
- Use real provider resolution with dummy API keys
- Expect proper validation behavior or provider-not-found errors
- Applied proper Redis mocking for conversation memory tests
- Simplified validation tests to focus on core functionality
- All 473 tests now pass 100% including 13 image support tests
This ensures CI/CD compatibility and follows the proven testing approach
used throughout the codebase for tool integration testing.
- Properly mock Redis client operations to support add_turn functionality
- Set up initial thread contexts so add_turn can find existing threads
- Mock Redis set operations to return success
- Ensure all Redis-dependent tests use proper mock patterns
- All 473 unit tests now pass 100% with proper isolation
- Add proper Redis client mocking to prevent connection attempts during CI
- Apply @patch("utils.conversation_memory.get_redis_client") decorators to all methods using Redis
- Mock thread contexts for get_thread calls to ensure tests work without Redis
- Fixes GitHub Actions failures: ConnectionRefusedError when connecting to localhost:6379
- Maintains test isolation and proper mock patterns used throughout test suite
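In pytest terms the pattern looks roughly like this (the patch target is quoted from the commit; the test body is illustrative):

```python
from unittest.mock import MagicMock, patch

@patch("utils.conversation_memory.get_redis_client")
def test_add_turn_without_redis(mock_get_client):
    # Nothing ever dials localhost:6379: every Redis operation
    # lands on this in-memory mock instead.
    fake_redis = MagicMock()
    fake_redis.get.return_value = None  # initial thread context, set up per-test
    fake_redis.set.return_value = True  # writes report success
    mock_get_client.return_value = fake_redis
    # ... exercise add_turn / get_thread here; they see only fake_redis
```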
- Fixed worst flake8 violations (300-600+ character lines) in tools directory
- Applied consistent multi-line string formatting for better readability
- Removed incompatible test files from main branch merge
- All 473 tests passing, all code quality checks pass
- Kept version 4.8.0 for new features
- Preserved our _is_builtin_custom_models_config approach over main's ALLOWED_INTERNAL_PATHS
- Our targeted solution is cleaner than the general whitelist approach
Fixed MagicMock comparison errors across multiple test suites by:
- Adding proper ModelCapabilities mocks with real values instead of MagicMock objects
- Updating test_auto_mode.py with correct provider mocking for model availability tests
- Updating test_thinking_modes.py with proper capabilities mocking in all thinking mode tests
- Updating test_tools.py with proper capabilities mocking for CodeReview and Analyze tools
- Fixing test_large_prompt_handling.py by adding proper provider mocking to prevent errors before large prompt detection
Fixed pytest collection warnings by:
- Renaming TestGenRequest to TestGenerationRequest to avoid pytest collecting it as a test class
- Renaming TestGenTool to TestGenerationTool to avoid pytest collecting it as a test class
- Updated all imports and references across server.py, tools/__init__.py, and test files
All 459 tests now pass without warnings or MagicMock comparison errors.
- Exit early at the MCP boundary if files won't fit within the chosen model's context (see the sketch after this list)
- Encourage Claude to re-run with better context
- Check file sizes before embedding
- Drop files from older conversations when building continuations and give priority to newer files
- List and mention excluded files to Claude on return
- Improved tests
- Improved precommit prompt
- Added a new Low severity to precommit
- Improved documentation of file embedding strategy
- Refactor
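A simplified sketch of the early-exit check (names and the chars/4 estimate are illustrative; the real logic uses the chosen model's actual token limits):

```python
def files_fit_in_context(file_contents: list[str], max_tokens: int) -> bool:
    """Pre-flight check before embedding files into the prompt.

    Uses a crude len/4 token estimate; the point is to fail fast at the
    MCP boundary rather than overflow the model's context mid-request.
    """
    estimated_tokens = sum(len(text) // 4 for text in file_contents)
    return estimated_tokens <= max_tokens
```

If the check fails, the tool returns early and asks Claude to retry with a narrower file selection, as described above.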