- Created build.sh script for building the Docker image with environment variable checks.
- Added deploy.sh script for deploying the Zen MCP Server with health checks and logging.
- Implemented healthcheck.py to verify the server process, Python imports, log directory, and environment variables (a minimal sketch follows this list).
- Developed comprehensive tests for Docker configuration, environment validation, and integration with MCP.
- Included performance tests for Docker image size and startup time.
- Added validation script tests to ensure proper Docker and MCP setup.
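The healthcheck referenced above might look roughly like the following sketch. The module names, environment variables, and the /app/logs path are assumptions for illustration, not the repository's actual values:

```python
#!/usr/bin/env python3
"""Illustrative container health check: exit 0 when healthy, 1 otherwise."""
import glob
import importlib
import os
import sys


def check_server_process() -> bool:
    """Look for a Python process running server.py (Linux /proc scan)."""
    for cmdline_path in glob.glob("/proc/[0-9]*/cmdline"):
        try:
            with open(cmdline_path, "rb") as f:
                if b"server.py" in f.read():
                    return True
        except OSError:
            continue
    return False


def check_imports() -> bool:
    """Verify that core Python dependencies can be imported."""
    for module in ("mcp", "openai"):  # assumed core dependencies
        try:
            importlib.import_module(module)
        except ImportError as exc:
            print(f"import check failed for {module}: {exc}", file=sys.stderr)
            return False
    return True


def check_environment() -> bool:
    """Require at least one provider API key to be present."""
    keys = ("GEMINI_API_KEY", "OPENAI_API_KEY", "OPENROUTER_API_KEY")
    return any(os.environ.get(key) for key in keys)


def check_log_dir(path: str = "/app/logs") -> bool:  # assumed log location
    """Ensure the log directory exists and is writable."""
    return os.path.isdir(path) and os.access(path, os.W_OK)


if __name__ == "__main__":
    checks = [check_server_process(), check_imports(), check_environment(), check_log_dir()]
    sys.exit(0 if all(checks) else 1)
```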
* feat: Enhance run-server.sh with uv-first Python environment setup
- Implement uv-first approach for faster environment setup when available
- Add automatic WSL environment detection using wslvar for better Windows integration
- Improve system package installation by using bash -c for better command execution
- Add comprehensive environment validation and fallback mechanisms
- Optimize dependency installation with uv when environment supports it
- Enhance configuration display workflow for better user experience
- Add environment markers to track uv-created environments
- Improve error handling and user messaging throughout setup process
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* feat: Add .claude/settings.local.json to .gitignore
Personal Claude Code settings should not be tracked in source control.
Reference: https://docs.anthropic.com/en/docs/claude-code/settings
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* feat: Add shared Claude Code settings.json for team defaults
- Add .claude/settings.json with default permissions for team use
- Remove .claude/settings.local.json from git tracking (now personal config)
Reference: https://docs.anthropic.com/en/docs/claude-code/settings
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: Address Gemini Code Assist review issues in run-server.sh
- Fix hardcoded Unix paths (lines 563 & 568) with cross-platform detection
- Improve error handling by capturing uv stderr instead of /dev/null
- Fix uv environment detection logic to check uv_created marker file
Resolves three issues identified in PR review:
1. High Priority: Replace hardcoded $VENV_PATH/bin/python with detection
2. Medium Priority: Capture and display uv command errors for debugging
3. Medium Priority: Use marker file instead of path matching for uv detection
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: Address additional Gemini Code Assist feedback
- Enhance WSL warning message with more helpful guidance
- Add security comment explaining bash -c usage over eval
- Add get_venv_python_path helper function for cleaner cross-platform detection
- Improve path resolution error handling with proper error checking
Addresses 4 additional review points from gemini-code-assist to improve
user experience, code clarity, and error handling robustness.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Description
This PR adds support for selectively disabling tools via the DISABLED_TOOLS environment variable, allowing users to customize which MCP tools are available in their Zen server instance. This feature enables better control over tool availability for security, performance, or organizational requirements.
## Changes Made
- [x] Added `DISABLED_TOOLS` environment variable support to selectively disable tools
- [x] Implemented tool filtering logic with protection for essential tools (version, listmodels)
- [x] Added comprehensive validation with warnings for unknown tools and attempts to disable essential tools
- [x] Updated `.env.example` with DISABLED_TOOLS documentation and examples
- [x] Added comprehensive test suite (16 tests) covering all edge cases
- [x] No breaking changes - feature is opt-in with default behavior unchanged
## Configuration
Add to `.env` file:
```bash
# Optional: Tool Selection
# Comma-separated list of tools to disable. If not set, all tools are enabled.
# Essential tools (version, listmodels) cannot be disabled.
# Available tools: chat, thinkdeep, planner, consensus, codereview, precommit,
# debug, docgen, analyze, refactor, tracer, testgen
# Examples:
# DISABLED_TOOLS= # All tools enabled (default)
# DISABLED_TOOLS=debug,tracer # Disable debug and tracer tools
# DISABLED_TOOLS=planner,consensus # Disable planning tools
```
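A minimal sketch of the filtering and validation logic described in the checklist above, assuming illustrative function and constant names rather than the actual implementation:

```python
import logging
import os

logger = logging.getLogger(__name__)

ESSENTIAL_TOOLS = {"version", "listmodels"}  # cannot be disabled


def parse_disabled_tools(all_tools: set[str]) -> set[str]:
    """Return the set of tool names to hide, based on DISABLED_TOOLS."""
    raw = os.environ.get("DISABLED_TOOLS", "")
    requested = {name.strip() for name in raw.split(",") if name.strip()}

    unknown = requested - all_tools
    if unknown:
        logger.warning("DISABLED_TOOLS contains unknown tools: %s", sorted(unknown))

    protected = requested & ESSENTIAL_TOOLS
    if protected:
        logger.warning("Essential tools cannot be disabled: %s", sorted(protected))

    return (requested & all_tools) - ESSENTIAL_TOOLS
```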
Moved aliases into SUPPORTED_MODELS instead of a separate shorthand mapping, more in line with how custom_models are declared
Further refactoring to clean up some code
## Description
This PR implements a new [DIAL](https://dialx.ai/dial_api) (Data & AI Layer) provider for the Zen MCP Server, enabling unified access to multiple AI models through the DIAL API platform. DIAL provides enterprise-grade AI model access with deployment-specific routing similar to Azure OpenAI.
## Changes Made
- [x] Added support for atexit:
- Ensures automatic cleanup of provider resources (HTTP clients, connection pools) on server shutdown
- Fixed a bug by using ModelProviderRegistry.get_available_providers() instead of accessing the private _providers attribute
- Works with SIGTERM/Ctrl+C for graceful shutdown in both development and containerized environments
- [x] Added new DIAL provider (`providers/dial.py`) inheriting from `OpenAICompatibleProvider`
- [x] Updated server.py to register DIAL provider during initialization
- [x] Updated provider registry to include DIAL provider type
- [x] Implemented deployment-specific routing for DIAL's Azure OpenAI-style endpoints
- [x] Implemented performance optimizations:
- Connection pooling with httpx for better performance
- Thread-safe client caching with double-check locking pattern
- Proper resource cleanup with `close()` method (see the sketch after this checklist)
- [x] Added comprehensive unit tests with 16 test cases (`tests/test_dial_provider.py`)
- [x] Added DIAL configuration to `.env.example` with documentation
- [x] Added support for configurable API version via `DIAL_API_VERSION` environment variable
- [x] Added DIAL model restrictions support via `DIAL_ALLOWED_MODELS` environment variable
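A rough sketch of the connection pooling, double-check locking, and atexit cleanup mentioned above. The class name, auth header, and deployment endpoint layout are assumptions for illustration, not the actual provider code:

```python
import atexit
import threading

import httpx


class DialClientCache:
    """Per-deployment httpx clients with double-check locking and shutdown cleanup."""

    def __init__(self) -> None:
        self._clients: dict[str, httpx.Client] = {}
        self._lock = threading.Lock()
        atexit.register(self.close)  # run close() when the interpreter exits

    def get(self, deployment: str, base_url: str, api_key: str) -> httpx.Client:
        client = self._clients.get(deployment)
        if client is None:  # first check without the lock (fast path)
            with self._lock:
                client = self._clients.get(deployment)
                if client is None:  # second check under the lock
                    client = httpx.Client(
                        base_url=f"{base_url}/openai/deployments/{deployment}",
                        headers={"Api-Key": api_key},
                        timeout=30.0,
                    )
                    self._clients[deployment] = client
        return client

    def close(self) -> None:
        """Close all pooled connections; safe to call more than once."""
        with self._lock:
            for client in self._clients.values():
                client.close()
            self._clients.clear()
```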
### Supported DIAL Models:
- OpenAI models: o3, o4-mini (and their dated versions)
- Google models: gemini-2.5-pro, gemini-2.5-flash (including search variant)
- Anthropic models: Claude 4 Opus/Sonnet (with and without thinking mode)
### Environment Variables:
- `DIAL_API_KEY`: Required API key for DIAL authentication
- `DIAL_API_HOST`: Optional base URL (defaults to https://core.dialx.ai)
- `DIAL_API_VERSION`: Optional API version header (defaults to 2025-01-01-preview)
- `DIAL_ALLOWED_MODELS`: Optional comma-separated list of allowed models
### Breaking Changes:
- None
### Dependencies:
- No new dependencies added (uses existing OpenAI SDK with custom routing)
* feat: Update Claude model references from v3 to v4
- Update model configurations from claude-3-opus to claude-4-opus
- Update model configurations from claude-3-sonnet to claude-4-sonnet
- Maintain backward compatibility through existing aliases (opus, sonnet, claude)
- Update provider registry preferred models list
- Update all test cases and assertions to reflect new model names
- Update documentation and examples consistently across all files
- Add Claude 4 model support while preserving existing functionality
Files modified: 15 (config, docs, providers, tests, tools)
Pattern: Systematic claude-3-* → claude-4-* model reference migration
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* PR feedback: changed anthropic/claude-4-opus -> anthropic/claude-opus-4 and anthropic/claude-4-haiku -> anthropic/claude-3.5-haiku
* changed anthropic/claude-4-sonnet -> anthropic/claude-sonnet-4
* PR feedback: removed specific model from test mock
* PR feedback: removed base.py
---------
Co-authored-by: Omry Nachman <omry@wix.com>
Co-authored-by: Claude <noreply@anthropic.com>
Description: This feature adds support for UTF-8 encoding in JSON responses, allowing for proper handling of special characters and emojis.
- Implement unit tests for UTF-8 encoding in various model providers including Gemini, OpenAI, and OpenAI Compatible.
- Validate UTF-8 support in token counting, content generation, and error handling.
- Introduce tests for JSON serialization ensuring proper handling of French characters and emojis.
- Create tests for language instruction generation based on locale settings.
- Validate UTF-8 handling in workflow tools including AnalyzeTool, CodereviewTool, and DebugIssueTool.
- Ensure that all tests check for correct UTF-8 character preservation and proper JSON formatting.
- Add integration tests to verify the interaction between locale settings and model responses.
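At its core, UTF-8-safe serialization comes down to disabling ASCII escaping in json.dumps, as in this minimal sketch (the wrapper function is illustrative):

```python
import json


def to_json(payload: dict) -> str:
    """Serialize a response dict, preserving accents and emojis verbatim."""
    # ensure_ascii=False keeps "é" and "🚀" as UTF-8 characters
    # instead of \u00e9 / \ud83d\ude80 escape sequences.
    return json.dumps(payload, ensure_ascii=False, indent=2)


print(to_json({"status": "réussi 🚀"}))
```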
* fix: remove duplicate version tool registration
The version tool was appearing twice in the MCP tool list due to:
- VersionTool class properly registered in TOOLS dictionary (line 181)
- Hardcoded Tool() registration in handle_list_tools() (lines 451-462)
This duplicate was leftover from the architectural migration:
- June 8, 2025: Original hardcoded "get_version" tool added
- June 14, 2025: Renamed from "get_version" to "version"
- June 21, 2025: VersionTool class added during workflow architecture migration
- The old hardcoded registration was never removed
The hardcoded registration has been removed since VersionTool provides
identical functionality through the proper architecture.
Fixes: BeehiveInnovations/zen-mcp-server#120
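A minimal sketch of the registry-based pattern that remains after this fix; ToolStub and the method names are illustrative stand-ins, not the actual server.py classes:

```python
from dataclasses import dataclass


@dataclass
class ToolStub:
    """Stand-in for the real tool classes; only the MCP definition matters here."""
    name: str
    description: str

    def get_definition(self) -> dict:
        return {"name": self.name, "description": self.description}


# Every tool, including "version", is registered exactly once in the registry.
TOOLS = {
    "version": ToolStub("version", "Show server version and configuration"),
    "listmodels": ToolStub("listmodels", "List available models"),
}


def handle_list_tools() -> list[dict]:
    """Build the MCP tool list purely from TOOLS; no hardcoded duplicate entries."""
    return [tool.get_definition() for tool in TOOLS.values()]
```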
* fix: complete removal of legacy version tool code
Following up on the duplicate version tool fix, this commit removes
all remaining dead code identified by Gemini Code Assist:
- Removed dead elif block for version tool (lines 639-643)
This block was unreachable since version is handled by TOOLS registry
- Removed orphaned handle_version() function (lines 942-1030)
No longer called after elif block removal
- Fixed imports: removed unused __author__ and __updated__ imports
These were remnants from the June 2025 migration from function-based
to class-based tools. The VersionTool class now handles all version
functionality through the standard tool architecture.
All 546 tests pass - no functional changes.
Related to: BeehiveInnovations/zen-mcp-server#120
* Fix model metadata preservation when using continuation_id
When continuing a conversation without specifying a model, the system now
correctly retrieves and uses the model from the previous assistant turn
instead of defaulting to DEFAULT_MODEL. This ensures model continuity
across conversation turns and fixes the metadata mismatch issue.
The fix:
- In reconstruct_thread_context(), check for previous assistant turns
- If no model is specified in the continuation request, use the model
from the most recent assistant turn
- This preserves the model choice across conversation continuations
Added comprehensive tests to verify the fix handles:
- Single turn conversations
- Multiple turns with different models
- No previous assistant turns (falls back to DEFAULT_MODEL)
- Explicit model specification (overrides previous turn)
- Thread chain relationships
Fixes issue where continuation metadata would incorrectly report
'llama3.2' instead of the actual model used (e.g., 'deepseek-r1-8b')
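A simplified sketch of the retrieval logic described above, using hypothetical data structures; the real logic lives in reconstruct_thread_context():

```python
DEFAULT_MODEL = "auto"  # placeholder default


def resolve_model(requested_model: str | None, turns: list[dict]) -> str:
    """Pick the model for a continued conversation.

    An explicitly requested model always wins; otherwise reuse the model
    from the most recent assistant turn, falling back to DEFAULT_MODEL.
    """
    if requested_model:
        return requested_model
    for turn in reversed(turns):
        if turn.get("role") == "assistant" and turn.get("model"):
            return turn["model"]
    return DEFAULT_MODEL
```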
* Update test to reference issue #111
* Refactor tests to call reconstruct_thread_context directly
Address Gemini Code Assist feedback by removing duplicated implementation
logic from tests. Tests now call the actual function with proper mocking
instead of reimplementing the model retrieval logic.
This improves maintainability and ensures tests validate actual behavior
rather than their own copy of the logic.
* WIP: new workflow architecture
* WIP: further improvements and cleanup
* WIP: cleanup and docs, replace old tool with new
* WIP: cleanup and docs, replace old tool with new
* WIP: new planner implementation using workflow
* WIP: precommit tool working as a workflow instead of a basic tool
Support for passing False to use_assistant_model to skip external models completely and use Claude only
* WIP: precommit workflow version swapped with old
* WIP: codereview
* WIP: replaced codereview
* WIP: replaced codereview
* WIP: replaced refactor
* WIP: workflow for thinkdeep
* WIP: ensure files get embedded correctly
* WIP: thinkdeep replaced with workflow version
* WIP: improved messaging when an external model's response is received
* WIP: analyze tool swapped
* WIP: updated tests
* Extract only the content when building history
* Use "relevant_files" for workflow tools only
* WIP: fixed get_completion_next_steps_message missing param
* Fixed tests
Request files consistently
* Fixed tests
* New testgen workflow tool
Updated docs
* Swap testgen workflow
* Fix CI test failures by excluding API-dependent tests
- Update GitHub Actions workflow to exclude simulation tests that require API keys
- Fix collaboration tests to properly mock workflow tool expert analysis calls
- Update test assertions to handle new workflow tool response format
- Ensure unit tests run without external API dependencies in CI
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* WIP - Update tests to match new tools
* WIP - Update tests to match new tools
* WIP - Update tests to match new tools
* Should help with https://github.com/BeehiveInnovations/zen-mcp-server/issues/97
Clear Python cache when running the script: https://github.com/BeehiveInnovations/zen-mcp-server/issues/96
Improved retry error logging
Cleanup
* WIP - chat tool using new architecture and improved code sharing
* Removed todo
* Removed todo
* Cleanup old name
* Tweak wordings
* Tweak wordings
Migrate old tests
* Support for Flash 2.0 and Flash Lite 2.0
* Support for Flash 2.0 and Flash Lite 2.0
* Support for Flash 2.0 and Flash Lite 2.0
Fixed test
* Improved consensus to use the workflow base class
* Improved consensus to use the workflow base class
* Allow images
* Allow images
* Replaced old consensus tool
* Cleanup tests
* Tests for prompt size
* New tool: docgen
Tests for prompt size
Fixes: https://github.com/BeehiveInnovations/zen-mcp-server/issues/107
Use available token size limits: https://github.com/BeehiveInnovations/zen-mcp-server/issues/105
* Improved docgen prompt
Exclude TestGen from pytest inclusion
* Updated errors
* Lint
* DocGen instructed not to fix bugs but to surface them and stick to documentation
* WIP
* Stop Claude from being lazy and documenting only a small handful
* More style rules
---------
Co-authored-by: Claude <noreply@anthropic.com>
Fix for: https://github.com/BeehiveInnovations/zen-mcp-server/issues/102
- Removed centralized MODEL_CAPABILITIES_DESC from config.py
- Added model descriptions to individual provider SUPPORTED_MODELS
- Updated _get_available_models() to use ModelProviderRegistry for API key filtering
- Added comprehensive test suite validating bug reproduction and fix
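Conceptually, each provider now owns its model descriptions and the registry filters by configured API keys, roughly as in this sketch (names, descriptions, and the helper are simplified stand-ins, not the actual registry API):

```python
import os

# Each provider owns its model descriptions instead of a central
# MODEL_CAPABILITIES_DESC table in config.py.
SUPPORTED_MODELS = {
    "gemini": {
        "gemini-2.5-pro": "Deep reasoning, large context",
        "gemini-2.5-flash": "Fast responses, large context",
    },
    "openai": {
        "o3": "Strong reasoning model",
        "o4-mini": "Fast, cost-efficient reasoning",
    },
}

API_KEY_VARS = {"gemini": "GEMINI_API_KEY", "openai": "OPENAI_API_KEY"}


def get_available_models() -> dict[str, str]:
    """Return model -> description only for providers whose API key is set."""
    available: dict[str, str] = {}
    for provider, models in SUPPORTED_MODELS.items():
        if os.environ.get(API_KEY_VARS[provider]):
            available.update(models)
    return available
```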