Consolidates duplicated image validation logic from individual providers
into a reusable base class method. This improves maintainability and
ensures consistent validation across all providers.
- Added validate_image() method to ModelProvider base class (sketched after this list)
- Supports both file paths and data URLs
- Validates image format, size, and MIME types
- Added DEFAULT_MAX_IMAGE_SIZE_MB class constant (20MB)
- Refactored Gemini and OpenAI providers to use base validation
- Added comprehensive test suite with 19 tests
- Used minimal mocking approach with concrete test provider class
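A minimal sketch of what the shared validation could look like; only `validate_image()` and `DEFAULT_MAX_IMAGE_SIZE_MB` come from this change, while the MIME list, return type, and helper logic are assumptions:

```python
import base64
import os


class ModelProvider:
    DEFAULT_MAX_IMAGE_SIZE_MB = 20.0
    SUPPORTED_IMAGE_MIME_TYPES = {"image/png", "image/jpeg", "image/gif", "image/webp"}

    def validate_image(self, image_path: str) -> tuple[bytes, str]:
        """Validate an image given as a file path or data URL.

        Returns the raw bytes and MIME type, raising ValueError on
        unsupported formats or oversized payloads.
        """
        if image_path.startswith("data:"):
            # Data URL, e.g. "data:image/png;base64,...."
            header, _, payload = image_path.partition(",")
            mime_type = header[len("data:"):].split(";", 1)[0]
            data = base64.b64decode(payload)
        else:
            ext = os.path.splitext(image_path)[1].lower()
            mime_type = {".png": "image/png", ".jpg": "image/jpeg",
                         ".jpeg": "image/jpeg", ".gif": "image/gif",
                         ".webp": "image/webp"}.get(ext, "")
            with open(image_path, "rb") as f:
                data = f.read()

        if mime_type not in self.SUPPORTED_IMAGE_MIME_TYPES:
            raise ValueError(f"Unsupported image type: {mime_type or 'unknown'}")
        if len(data) > self.DEFAULT_MAX_IMAGE_SIZE_MB * 1024 * 1024:
            raise ValueError(f"Image exceeds {self.DEFAULT_MAX_IMAGE_SIZE_MB}MB limit")
        return data, mime_type
```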
- Fix lint errors: trailing whitespace and deprecated typing imports
- Update test mock for o3-pro response format (output.content[] → output_text)
- Implement robust test isolation with a monkeypatch fixture (sketched below)
- Clear provider registry cache to prevent test interference
- Ensure o3-pro tests pass in both individual and full suite execution
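A hypothetical shape for that isolation fixture; the registry module path and the `_instance` attribute name are assumptions:

```python
import pytest


@pytest.fixture(autouse=True)
def isolated_provider_registry(monkeypatch):
    """Reset the provider registry cache so earlier tests cannot leak
    providers into this module's tests (assumed internal layout)."""
    from providers.registry import ModelProviderRegistry  # assumed import path

    monkeypatch.setattr(ModelProviderRegistry, "_instance", None, raising=False)
    yield
```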
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add o3_pro_basic_math.json cassette for test_o3_pro_output_text_fix.py
- Remove unused o3_pro_content_capture.json cassette
- This allows tests to run without API keys in CI/CD
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fix o3-pro response parsing to use output_text convenience field
- Replace respx with a custom httpx transport for better reliability (sketched below)
- Implement comprehensive PII sanitization to prevent secret exposure
- Add HTTP request/response recording with cassette format for testing
- Sanitize all existing cassettes to remove exposed API keys
- Update documentation to reflect new HTTP transport recorder
- Add test suite for PII sanitization and HTTP recording
This change:
1. Fixes timeout issues with o3-pro API calls (was 2+ minutes, now ~15-22 seconds)
2. Properly captures response content without httpx.ResponseNotRead exceptions
3. Preserves original HTTP response format including gzip compression
4. Prevents future secret exposure with automatic PII sanitization
5. Enables reliable replay testing for o3-pro interactions
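A hedged sketch of the transport-based recorder and PII masking described above; the project's actual recorder and sanitization rules differ in detail:

```python
import re

import httpx

_SECRET_RE = re.compile(r"(sk-[A-Za-z0-9-]{8,}|Bearer\s+\S+)")


def sanitize(text: str) -> str:
    # Assumed PII rule: mask API keys and bearer tokens before a
    # request/response pair is written to a cassette.
    return _SECRET_RE.sub("<REDACTED>", text)


class RecordingTransport(httpx.BaseTransport):
    """Record each request/response pair while passing traffic through."""

    def __init__(self, inner: httpx.BaseTransport, cassette: list):
        self._inner = inner
        self._cassette = cassette

    def handle_request(self, request: httpx.Request) -> httpx.Response:
        response = self._inner.handle_request(request)
        body = response.read()  # read once, so replay never raises ResponseNotRead
        self._cassette.append({
            "request": {"method": request.method, "url": sanitize(str(request.url))},
            "response": {"status": response.status_code, "body": body.hex()},
        })
        # body is already decompressed, so drop stale encoding headers.
        headers = [(k, v) for k, v in response.headers.items()
                   if k.lower() not in ("content-encoding", "content-length")]
        return httpx.Response(response.status_code, headers=headers,
                              content=body, request=request)
```

Installing it would be a matter of passing `transport=RecordingTransport(httpx.HTTPTransport(), cassette)` when constructing the `httpx.Client` under test.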
Co-Authored-By: Claude <noreply@anthropic.com>
- Introduced tests for Docker deployment scripts, verifying existence, permissions, and proper command usage (see the example after this list).
- Added tests for Docker integration with Claude Desktop, validating MCP configuration and command formats.
- Implemented health check tests for Docker, ensuring script functionality and proper configuration in Docker setup.
- Created tests for Docker MCP validation, focusing on command validation and security configurations.
- Developed security tests for Docker configurations, checking for non-root user setups, privilege restrictions, and sensitive data handling.
- Added volume persistence tests to ensure configuration and logs are correctly managed across container runs.
- Updated .dockerignore to exclude sensitive files and added relevant tests for Docker secrets handling.
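For illustration, an existence-and-permissions check in pytest; the script paths are assumptions:

```python
import os
import stat
from pathlib import Path

import pytest

# Assumed script locations; the repository's actual paths may differ.
SCRIPTS = [Path("docker/scripts/build.sh"), Path("docker/scripts/deploy.sh")]


@pytest.mark.parametrize("script", SCRIPTS, ids=str)
def test_script_exists_and_is_executable(script):
    # Deployment scripts must exist and carry the owner-executable bit.
    assert script.exists(), f"{script} is missing"
    assert os.stat(script).st_mode & stat.S_IXUSR, f"{script} is not executable"
```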
Added new confidence values (very_high, almost_certain) to all workflow tools
to provide more granular confidence tracking. Updated enum declarations in:
- analyze.py, codereview.py, debug.py, precommit.py, secaudit.py, testgen.py
- Updated debug.py's get_required_actions to handle new confidence values
- All tools now use a consistent 7-value confidence scale (illustrated below)
- refactor.py kept its unique scale (exploring/incomplete/partial/complete)
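For illustration, the assumed 7-value scale as a `Literal` type; the actual declarations live in each tool's schema and may differ:

```python
from typing import Literal

Confidence = Literal[
    "exploring",       # still forming a hypothesis
    "low",
    "medium",
    "high",
    "very_high",       # new in this change
    "almost_certain",  # new in this change
    "certain",         # 100% local confidence: no external validation needed
]
```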
Also fixed model thinking configuration:
- Added very_high and almost_certain to MODEL_THINKING_PREFERENCES
- Set medium thinking for very_high, high thinking for almost_certain
- Updated prompts to clarify that `certain` means 100% local confidence
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Created build.sh script for building the Docker image with environment variable checks.
- Added deploy.sh script for deploying the Zen MCP Server with health checks and logging.
- Implemented healthcheck.py to verify the server process, Python imports, the log directory, and environment variables (sketched after this list).
- Developed comprehensive tests for Docker configuration, environment validation, and integration with MCP.
- Included performance tests for Docker image size and startup time.
- Added validation script tests to ensure proper Docker and MCP setup.
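A hedged sketch of the checks healthcheck.py performs; module names, paths, and required variables are assumptions, and the server-process check is omitted for brevity:

```python
#!/usr/bin/env python3
"""Sketch of healthcheck-style probes returning exit code 0 on success."""
import importlib
import os
import sys


def check_imports() -> bool:
    for module in ("mcp", "httpx"):  # assumed runtime dependencies
        try:
            importlib.import_module(module)
        except ImportError:
            return False
    return True


def check_log_dir(path: str = "logs") -> bool:
    return os.path.isdir(path) and os.access(path, os.W_OK)


def check_env() -> bool:
    # Assumed requirement: at least one provider key must be configured.
    provider_keys = ("GEMINI_API_KEY", "OPENAI_API_KEY", "DIAL_API_KEY")
    return any(os.environ.get(key) for key in provider_keys)


if __name__ == "__main__":
    sys.exit(0 if all((check_imports(), check_log_dir(), check_env())) else 1)
```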
## Description
This PR adds support for selectively disabling tools via the DISABLED_TOOLS environment variable, allowing users to customize which MCP tools are available in their Zen server instance. This feature enables better control over tool availability for security, performance, or organizational requirements.
## Changes Made
- [x] Added `DISABLED_TOOLS` environment variable support to selectively disable tools
- [x] Implemented tool filtering logic with protection for essential tools (version, listmodels); see the sketch after the configuration example
- [x] Added comprehensive validation with warnings for unknown tools and attempts to disable essential tools
- [x] Updated `.env.example` with DISABLED_TOOLS documentation and examples
- [x] Added comprehensive test suite (16 tests) covering all edge cases
- [x] No breaking changes - feature is opt-in with default behavior unchanged
## Configuration
Add to `.env` file:
```bash
# Optional: Tool Selection
# Comma-separated list of tools to disable. If not set, all tools are enabled.
# Essential tools (version, listmodels) cannot be disabled.
# Available tools: chat, thinkdeep, planner, consensus, codereview, precommit,
# debug, docgen, analyze, refactor, tracer, testgen
# Examples:
# DISABLED_TOOLS= # All tools enabled (default)
# DISABLED_TOOLS=debug,tracer # Disable debug and tracer tools
# DISABLED_TOOLS=planner,consensus # Disable planning tools
```
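A minimal sketch of the filtering logic described above; function and variable names are assumptions, and the real implementation lives in server.py:

```python
import logging
import os

logger = logging.getLogger(__name__)

ESSENTIAL_TOOLS = {"version", "listmodels"}  # protected: cannot be disabled


def filter_disabled_tools(all_tools: dict) -> dict:
    """`all_tools` maps tool name -> tool instance."""
    raw = os.environ.get("DISABLED_TOOLS", "")
    disabled = {name.strip() for name in raw.split(",") if name.strip()}

    if unknown := disabled - set(all_tools):
        logger.warning("Unknown tools in DISABLED_TOOLS: %s", sorted(unknown))
    if protected := disabled & ESSENTIAL_TOOLS:
        logger.warning("Essential tools cannot be disabled: %s", sorted(protected))

    return {name: tool for name, tool in all_tools.items()
            if name not in disabled or name in ESSENTIAL_TOOLS}
```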
Moved aliases into SUPPORTED_MODELS instead of a separate shorthand mapping, more in line with how custom_models are declared
Further refactoring to clean up some code
## Description
This PR implements a new [DIAL](https://dialx.ai/dial_api) (Data & AI Layer) provider for the Zen MCP Server, enabling unified access to multiple AI models through the DIAL API platform. DIAL provides enterprise-grade AI model access with deployment-specific routing similar to Azure OpenAI.
## Changes Made
- [x] Added support for atexit:
- Ensures automatic cleanup of provider resources (HTTP clients, connection pools) on server shutdown
- Fixed a bug by using ModelProviderRegistry.get_available_providers() instead of accessing the private _providers attribute
- Works with SIGTERM/Ctrl+C for graceful shutdown in both development and containerized environments
- [x] Added new DIAL provider (`providers/dial.py`) inheriting from `OpenAICompatibleProvider`
- [x] Updated server.py to register DIAL provider during initialization
- [x] Updated provider registry to include DIAL provider type
- [x] Implemented deployment-specific routing for DIAL's Azure OpenAI-style endpoints
- [x] Implemented performance optimizations (see the sketch after this checklist):
- Connection pooling with httpx for better performance
- Thread-safe client caching with double-check locking pattern
- Proper resource cleanup with `close()` method
- [x] Added comprehensive unit tests with 16 test cases (`tests/test_dial_provider.py`)
- [x] Added DIAL configuration to `.env.example` with documentation
- [x] Added support for configurable API version via `DIAL_API_VERSION` environment variable
- [x] Added DIAL model restrictions support via `DIAL_ALLOWED_MODELS` environment variable
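A hedged sketch of the atexit cleanup and double-check locking described in the checklist; the class layout, auth header, and deployment URL scheme are assumptions:

```python
import atexit
import threading

import httpx


class DIALModelProvider:
    """Sketch only; the real provider in providers/dial.py inherits from
    OpenAICompatibleProvider and is structured differently."""

    def __init__(self, api_key: str, host: str = "https://core.dialx.ai"):
        self._api_key = api_key
        self._host = host
        self._clients: dict[str, httpx.Client] = {}
        self._lock = threading.Lock()
        atexit.register(self.close)  # automatic cleanup on shutdown

    def _client_for(self, deployment: str) -> httpx.Client:
        # Double-check locking: lock-free fast path, then re-check
        # under the lock before creating a pooled client.
        client = self._clients.get(deployment)
        if client is None:
            with self._lock:
                client = self._clients.get(deployment)
                if client is None:
                    client = httpx.Client(
                        base_url=f"{self._host}/openai/deployments/{deployment}",
                        headers={"Api-Key": self._api_key},  # assumed auth header
                        limits=httpx.Limits(max_keepalive_connections=5),
                    )
                    self._clients[deployment] = client
        return client

    def close(self) -> None:
        with self._lock:
            for client in self._clients.values():
                client.close()
            self._clients.clear()
```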
### Supported DIAL Models:
- OpenAI models: o3, o4-mini (and their dated versions)
- Google models: gemini-2.5-pro, gemini-2.5-flash (including search variant)
- Anthropic models: Claude 4 Opus/Sonnet (with and without thinking mode)
### Environment Variables:
- `DIAL_API_KEY`: Required API key for DIAL authentication
- `DIAL_API_HOST`: Optional base URL (defaults to https://core.dialx.ai)
- `DIAL_API_VERSION`: Optional API version header (defaults to 2025-01-01-preview)
- `DIAL_ALLOWED_MODELS`: Optional comma-separated list of allowed models
### Breaking Changes:
- None
### Dependencies:
- No new dependencies added (uses existing OpenAI SDK with custom routing)
* feat: Update Claude model references from v3 to v4
- Update model configurations from claude-3-opus to claude-4-opus
- Update model configurations from claude-3-sonnet to claude-4-sonnet
- Maintain backward compatibility through existing aliases (opus, sonnet, claude)
- Update provider registry preferred models list
- Update all test cases and assertions to reflect new model names
- Update documentation and examples consistently across all files
- Add Claude 4 model support while preserving existing functionality
Files modified: 15 (config, docs, providers, tests, tools)
Pattern: Systematic claude-3-* → claude-4-* model reference migration
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* PR feedback: changed anthropic/claude-4-opus -> anthropic/claude-opus-4 and anthropic/claude-4-haiku -> anthropic/claude-3.5-haiku
* changed anthropic/claude-4-sonnet -> anthropic/claude-sonnet-4
* PR feedback removed specific model from test mock
* PR feedback removed base.py
---------
Co-authored-by: Omry Nachman <omry@wix.com>
Co-authored-by: Claude <noreply@anthropic.com>
Description: This feature adds support for UTF-8 encoding in JSON responses, allowing proper handling of special characters and emojis (see the example after the list below).
- Implement unit tests for UTF-8 encoding in various model providers including Gemini, OpenAI, and OpenAI Compatible.
- Validate UTF-8 support in token counting, content generation, and error handling.
- Introduce tests for JSON serialization ensuring proper handling of French characters and emojis.
- Create tests for language instruction generation based on locale settings.
- Validate UTF-8 handling in workflow tools including AnalyzeTool, CodereviewTool, and DebugIssueTool.
- Ensure that all tests check for correct UTF-8 character preservation and proper JSON formatting.
- Add integration tests to verify the interaction between locale settings and model responses.
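The crux of UTF-8-safe JSON output is disabling ASCII escaping; a minimal illustration, not the project's actual serialization code:

```python
import json

payload = {"message": "Réponse terminée", "emoji": "🤖"}

# ensure_ascii=False keeps UTF-8 characters intact instead of
# escaping them as \uXXXX sequences.
print(json.dumps(payload, ensure_ascii=False))
# -> {"message": "Réponse terminée", "emoji": "🤖"}
```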
* Fix model metadata preservation when using continuation_id
When continuing a conversation without specifying a model, the system now
correctly retrieves and uses the model from the previous assistant turn
instead of defaulting to DEFAULT_MODEL. This ensures model continuity
across conversation turns and fixes the metadata mismatch issue.
The fix (sketched below):
- In reconstruct_thread_context(), check for previous assistant turns
- If no model is specified in the continuation request, use the model
from the most recent assistant turn
- This preserves the model choice across conversation continuations
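A hedged sketch of that fallback order; the function, attribute, and constant names are assumptions:

```python
DEFAULT_MODEL = "auto"  # placeholder; the real value comes from config.py


def resolve_model_for_continuation(arguments: dict, thread) -> str:
    """Pick the model for a continuation request (assumed helper shape)."""
    if arguments.get("model"):
        return arguments["model"]  # explicit choice always wins
    # Walk the turns newest-first and reuse the last assistant model.
    for turn in reversed(thread.turns):
        if turn.role == "assistant" and getattr(turn, "model_name", None):
            return turn.model_name
    return DEFAULT_MODEL  # no previous assistant turn: fall back
```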
Added comprehensive tests to verify the fix handles:
- Single turn conversations
- Multiple turns with different models
- No previous assistant turns (falls back to DEFAULT_MODEL)
- Explicit model specification (overrides previous turn)
- Thread chain relationships
Fixes issue where continuation metadata would incorrectly report
'llama3.2' instead of the actual model used (e.g., 'deepseek-r1-8b')
* Update test to reference issue #111
* Refactor tests to call reconstruct_thread_context directly
Address Gemini Code Assist feedback by removing duplicated implementation
logic from tests. Tests now call the actual function with proper mocking
instead of reimplementing the model retrieval logic.
This improves maintainability and ensures tests validate actual behavior
rather than their own copy of the logic.