The previous fix (aceddb6) removed --search entirely, disabling web search.
This restores web search functionality using the correct --enable flag
that works with the codex exec subcommand.
Related to #338
- Add anthropic/claude-opus-4.5 with aliases: opus, opus4.5, claude-opus
- Set intelligence_score to 18 (matching Gemini 3 Pro)
- Update Opus 4.1 to use opus4.1 alias only
- Update tests to reflect new alias mappings
Note: supports_function_calling and supports_json_mode are set to false,
following the existing project pattern for Claude models, even though
the OpenRouter API supports these features.
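For reference, a minimal sketch of the new entry, expressed as a Python dict mirroring the JSON; the `model_name` key name is an assumption, while the other fields come from the notes above.

```python
# Sketch only: shape of the new custom_models.json entry.
claude_opus_4_5 = {
    "model_name": "anthropic/claude-opus-4.5",  # assumed key name
    "aliases": ["opus", "opus4.5", "claude-opus"],
    "intelligence_score": 18,              # matching Gemini 3 Pro
    "supports_function_calling": False,    # existing project pattern for Claude models
    "supports_json_mode": False,
}
```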
Stay in codex to plan, review, and fix complicated bugs, then ask it to spawn Claude Code and implement the plan.
This uses your current subscription instead of API tokens.
Model definitions now support a new `allow_code_generation` flag, intended only for higher-reasoning models such as GPT-5 Pro and Gemini 2.5 Pro.
When `true`, the `chat` tool can request that the external model generate a full implementation, update, or set of instructions, and then share that output with the calling agent.
This effectively lets us use more powerful models such as GPT-5 Pro (available either via API only or as part of the $200 Pro plan within the ChatGPT app) to generate code or entire implementations for us.
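A minimal sketch of where the flag could sit in a model definition; only `allow_code_generation` itself comes from this change, the surrounding layout and key names are assumptions.

```python
# Hypothetical capability entry; only the allow_code_generation flag is introduced here.
high_reasoning_model = {
    "model_name": "gpt-5-pro",       # assumed key name
    "allow_code_generation": True,   # reserve for higher-reasoning models (GPT-5 Pro, Gemini 2.5 Pro)
}
```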
Adds configuration for the new GPT-5 Pro model released by OpenAI.
Specifications from official docs:
- 400K context window
- 272K max output tokens (largest in the GPT-5 family)
- Highest reasoning capability (5/5)
- Reasoning token support
- Vision input support (text+image input, text output only)
- No temperature support (fixed, like other reasoning models)
- Available via Responses API only (use_openai_response_api: true)
- Default reasoning effort: high
Accessible via alias 'gpt5pro'.
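An illustrative capability entry built from the specs above, shown as a Python dict; `use_openai_response_api` comes from the notes, the remaining key names are assumptions.

```python
# Sketch only: GPT-5 Pro capability entry assembled from the listed specs.
gpt5_pro = {
    "model_name": "gpt-5-pro",         # assumed key name
    "aliases": ["gpt5pro"],
    "context_window": 400_000,
    "max_output_tokens": 272_000,
    "supports_images": True,           # text+image input, text output only
    "supports_temperature": False,     # fixed temperature, like other reasoning models
    "use_openai_response_api": True,   # available via the Responses API only
    "default_reasoning_effort": "high",
}
```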
Zen now allows you to define `roles` for an external CLI and delegate work to it via the new `clink` tool (short for `CLI + Link`). Gemini, for instance, offers 1,000 free requests a day, which means you can save tokens and preserve your weekly limits in Claude Code by delegating work to another, entirely capable CLI agent!
Define your own system prompts as `roles` and make another CLI do anything you'd like. Like the current tool you're connected to, the other CLI has complete access to your files and the current context. This also works incredibly well with Zen's `conversation continuity`.
feat: Azure OpenAI / Azure AI Foundry support. Models should be defined in conf/azure_models.json (or a custom path). See .env.example for the environment variables, or see the README. https://github.com/BeehiveInnovations/zen-mcp-server/issues/265
feat: OpenRouter / Custom Models / Azure can each use a custom config path now (see .env.example)
refactor: model registry class made abstract; OpenRouter / Custom Provider / Azure OpenAI now subclass it (see the sketch after this list)
refactor: breaking change: the `is_custom` property has been removed from model_capabilities.py (and thus custom_models.json), since models are now read from separate configuration files
fix: model definitions re-introduced into the schema, but intelligently: only a summary is generated per tool. Required to ensure the CLI calls and uses the correct model
fix: removed the `model` param from tools where it wasn't needed
fix: enforced `*_ALLOWED_MODELS` by advertising only the allowed models to the CLI
fix: removed duplicates across providers when passing canonical names back to the CLI; the first enabled provider wins
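A rough sketch of the registry refactor described above: an abstract base class whose subclasses each point at their own config file and env override. Class, method, and environment-variable names here are assumptions, not the repository's actual API.

```python
from abc import ABC, abstractmethod
import json
import os
from pathlib import Path


class ModelRegistry(ABC):
    """Loads model capabilities from a provider-specific JSON file."""

    @abstractmethod
    def default_config_path(self) -> str:
        """The JSON file this registry reads by default."""

    @abstractmethod
    def path_env_var(self) -> str:
        """Environment variable that can point at a custom config path."""

    def load(self) -> dict:
        # Custom path (if set) wins over the bundled default.
        path = os.getenv(self.path_env_var(), self.default_config_path())
        return json.loads(Path(path).read_text(encoding="utf-8"))


class OpenRouterModelRegistry(ModelRegistry):
    def default_config_path(self) -> str:
        return "conf/custom_models.json"

    def path_env_var(self) -> str:
        return "CUSTOM_MODELS_CONFIG_PATH"  # assumed name; see .env.example


class AzureModelRegistry(ModelRegistry):
    def default_config_path(self) -> str:
        return "conf/azure_models.json"

    def path_env_var(self) -> str:
        return "AZURE_MODELS_CONFIG_PATH"  # assumed name; see .env.example
```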
- Added anthropic/claude-sonnet-4.5 with sonnet and sonnet4.5 aliases
- Updated sonnet alias to point to newest Sonnet 4.5 (was 4.1)
- Kept sonnet4.1 alias for backwards compatibility with Claude Sonnet 4.1
- All tests updated and passing 100%
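Resulting alias resolution, shown as an illustrative dict (the real mapping lives in the model configuration; the Sonnet 4.1 model id is an assumption):

```python
# Illustrative alias mapping after this change.
sonnet_aliases = {
    "sonnet": "anthropic/claude-sonnet-4.5",     # now points at the newest Sonnet
    "sonnet4.5": "anthropic/claude-sonnet-4.5",
    "sonnet4.1": "anthropic/claude-sonnet-4.1",  # kept for backwards compatibility (id assumed)
}
```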
Adding max tokens for consistency per review comment
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
This commit updates all references to Claude Opus 4 and Sonnet 4 to their newer 4.1 versions throughout the codebase.
The changes include:
- Updating model names in `conf/custom_models.json` and `providers/dial.py`.
- Updating aliases and descriptions to match the new model versions.
- Updating `.env.example` to reflect the new model names.
- Updating all relevant test suites to use the new model names and ensure all tests pass.
- Remove redundant path checks between Path("conf/custom_models.json") and Path.cwd() variants
- Implement proper importlib.resources.files('conf') approach for robust packaging (sketched below)
- Create conf/__init__.py to make conf a proper Python package
- Update pyproject.toml to include conf* in package discovery
- Clean up verbose comments and simplify resource loading logic
- Fix test mocking to use correct importlib.resources.files target
- All tests passing (8/8) with proper resource and fallback functionality
Addresses all gemini-code-assist bot feedback from PR #227
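A minimal sketch of the packaging-safe loading approach, assuming a disk fallback; the function name and error handling are illustrative, not the actual code.

```python
import json
from importlib import resources
from pathlib import Path


def load_custom_models() -> dict:
    try:
        # Works whether the package is installed or run from a checkout,
        # because conf/ is now a proper Python package with __init__.py.
        data = (resources.files("conf") / "custom_models.json").read_text(encoding="utf-8")
    except (FileNotFoundError, ModuleNotFoundError):
        # Fallback for environments where conf/ only exists on disk (assumed behaviour).
        data = Path("conf/custom_models.json").read_text(encoding="utf-8")
    return json.loads(data)
```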
* feat: Update Claude model references from v3 to v4
- Update model configurations from claude-3-opus to claude-4-opus
- Update model configurations from claude-3-sonnet to claude-4-sonnet
- Maintain backward compatibility through existing aliases (opus, sonnet, claude)
- Update provider registry preferred models list
- Update all test cases and assertions to reflect new model names
- Update documentation and examples consistently across all files
- Add Claude 4 model support while preserving existing functionality
Files modified: 15 (config, docs, providers, tests, tools)
Pattern: Systematic claude-3-* → claude-4-* model reference migration
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* PR feedback: changed anthropic/claude-4-opus -> anthropic/claude-opus-4 and anthropic/claude-4-haiku -> anthropic/claude-3.5-haiku
* changed anthropic/claude-4-sonnet -> anthropic/claude-sonnet-4
* PR feedback: removed specific model from test mock
* PR feedback: removed base.py
---------
Co-authored-by: Omry Nachman <omry@wix.com>
Co-authored-by: Claude <noreply@anthropic.com>
Fix for: https://github.com/BeehiveInnovations/zen-mcp-server/issues/102
- Removed centralized MODEL_CAPABILITIES_DESC from config.py
- Added model descriptions to individual provider SUPPORTED_MODELS
- Updated _get_available_models() to use ModelProviderRegistry for API key filtering
- Added comprehensive test suite validating bug reproduction and fix
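Sketch of the decentralised layout: each provider carries its own model descriptions and only key-backed providers are advertised. `SUPPORTED_MODELS` and `_get_available_models` appear in the notes above; the descriptions and helper structure are assumptions.

```python
# Illustrative only: per-provider descriptions replace the centralized
# MODEL_CAPABILITIES_DESC that used to live in config.py.
class OpenAIModelProvider:
    SUPPORTED_MODELS = {
        "o3": "Strong reasoning model",            # descriptions are placeholders
        "o4-mini": "Fast, cost-efficient reasoning model",
    }


def _get_available_models(enabled_providers) -> dict[str, str]:
    """Collect descriptions only from providers with a configured API key."""
    available: dict[str, str] = {}
    for provider in enabled_providers:  # assumed: registry pre-filters by API key
        available.update(provider.SUPPORTED_MODELS)
    return available
```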
- Added o3-pro model configuration to custom_models.json with 200K context
- Updated OpenAI provider to support o3-pro with fixed temperature constraint
- Extended simulator tests to include o3-pro validation scenarios
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Further fixes to tests
Pass O3 simulation test when keys are not set, along with a notice
Updated docs on testing, simulation tests, and contributing
Support for OpenAI o4-mini and o4-mini-high