Adds support for OpenAI's GPT-5-Codex model which uses the new Responses API
endpoint (/v1/responses) instead of the standard Chat Completions API.
Changes:
- Add GPT-5-Codex to MODEL_CAPABILITIES with 400K context, 128K output (entry sketched below)
- Prioritize GPT-5-Codex for EXTENDED_REASONING tasks
- Add aliases: codex, gpt5-codex, gpt-5-code
- Update tests to expect GPT-5-Codex for extended reasoning
Benefits:
- 40-80% cost savings through Responses API caching
- 3% better performance on coding tasks (SWE-bench)
- Leverages existing dual-API infrastructure
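A minimal sketch of what the new capability entry could look like; the field names and dict shape are illustrative, not the project's actual `MODEL_CAPABILITIES` schema:

```python
# Hypothetical entry shape; real field names in MODEL_CAPABILITIES may differ.
MODEL_CAPABILITIES["gpt-5-codex"] = {
    "context_window": 400_000,      # 400K input context
    "max_output_tokens": 128_000,   # 128K output
    "endpoint": "/v1/responses",    # Responses API, not Chat Completions
    "aliases": ["codex", "gpt5-codex", "gpt-5-code"],
    "preferred_for": ["EXTENDED_REASONING"],
}
```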
fix: model definition re-introduced into the schema, but selectively: only a summary is generated per tool. Required to ensure the CLI selects and uses the correct model
fix: removed the `model` param from tools where it wasn't needed
fix: enforced adherence to `*_ALLOWED_MODELS` by advertising only the allowed models to the CLI
fix: removed duplicates across providers when passing canonical names back to the CLI; the first enabled provider wins
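A sketch of the two rules above, assuming providers are iterated in priority order; the accessor names are illustrative, not the real provider API:

```python
def canonical_models_for_cli(providers) -> list[str]:
    """Collect canonical model names to advertise to the CLI.

    Honors *_ALLOWED_MODELS (each provider lists only allowed models) and
    dedupes across providers: the first enabled provider wins.
    """
    seen: set[str] = set()
    models: list[str] = []
    for provider in providers:          # assumed priority order
        if not provider.is_enabled():   # illustrative accessor
            continue
        for name in provider.list_allowed_models():  # illustrative accessor
            if name not in seen:
                seen.add(name)
                models.append(name)
    return models
```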
docs: document provider base class
refactor: cleanup custom provider, it should only deal with `is_custom` model configurations
fix: make sure openrouter provider does not load `is_custom` models
fix: listmodels tool cleanup
Fixed issue where OpenAI models appeared twice in listmodels output by:
- Removing self-referencing aliases from OpenAI model definitions (e.g., "gpt-5" no longer includes "gpt-5" in its aliases)
- Adding filter in listmodels.py to skip aliases that match the model name
- Cleaned up inconsistent alias naming (o3-pro -> o3pro)
This ensures each model appears only once in the listing while preserving all useful aliases.
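The filter reduces to a one-line check; this standalone version is only illustrative of the logic added to `listmodels.py`:

```python
def visible_aliases(model_name: str, aliases: list[str]) -> list[str]:
    """Drop self-referencing aliases so each model is listed exactly once."""
    return [alias for alias in aliases if alias != model_name]
```

For example, `visible_aliases("gpt-5", ["gpt-5", "gpt5"])` returns `["gpt5"]`.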
- Extract restriction checking logic into reusable helper method
- Refactor validate_model_name to reduce code duplication
- Fix logging import by using existing module-level logger
- Clean up test file by removing print statement and main block
- All tests continue to pass after refactoring
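A self-contained sketch of the extracted helper; the class and attribute names are hypothetical, not the provider's real API:

```python
import logging

logger = logging.getLogger(__name__)  # module-level logger, as in the refactor

class ProviderSketch:
    """Illustrative only; real providers read these values from config/env."""

    def __init__(self, allowed: set[str], aliases: dict[str, str]):
        self.allowed = allowed    # e.g. parsed from an *_ALLOWED_MODELS variable
        self.aliases = aliases    # alias -> canonical name

    def _is_allowed(self, canonical_name: str) -> bool:
        # Reusable restriction check: an empty set means "no restriction".
        return not self.allowed or canonical_name in self.allowed

    def validate_model_name(self, name: str) -> bool:
        canonical = self.aliases.get(name, name)
        if not self._is_allowed(canonical):
            logger.debug("Model %s rejected by restrictions", name)
            return False
        return True
```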
- OpenAI provider now checks the custom models registry for user configurations
- Custom models with `supports_temperature=false` no longer send the temperature parameter to the API (sketched below)
- Fixes 400 errors for custom o3/gpt-5 models configured without temperature support
- Added comprehensive tests to verify the fix works correctly
- Maintains backward compatibility with built-in models
Fixes #245
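The guard amounts to building request parameters conditionally; this is a sketch, not the provider's actual request-building code:

```python
def build_request_params(model_config: dict, temperature: float) -> dict:
    params = {"model": model_config["name"]}
    # Custom o3/gpt-5 style models configured with supports_temperature=false
    # reject the parameter with a 400 error, so only send it when supported.
    if model_config.get("supports_temperature", True):
        params["temperature"] = temperature
    return params
```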
- Fix ModelContext constructor call in consensus tool (remove invalid parameters)
- Refactor temperature pattern matching for better readability per code review
- All tests now passing (799/799 passed)
- Fix consensus tool hardcoded temperature=0.2 bypassing model capabilities
- Add intelligent temperature inference for unknown custom models
- Support multi-model collaboration (O3, Gemini, Claude, Mistral, DeepSeek)
- Only OpenAI O-series and DeepSeek reasoner models reject temperature
- Most reasoning models (Gemini Pro, Claude, Mistral) DO support temperature
- Comprehensive logging for temperature decisions and user guidance
Resolves: https://github.com/BeehiveInnovations/zen-mcp-server/issues/245
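A sketch of the inference heuristic; the patterns below are an assumption based on the bullets above, not the tool's exact rules:

```python
import re

NO_TEMPERATURE_PATTERNS = (
    r"^o\d",                 # OpenAI O-series: o1, o3, o3-pro, ...
    r"deepseek.*reasoner",   # DeepSeek reasoner models
)

def supports_temperature(model_name: str) -> bool:
    """Default to True: most reasoning models (Gemini Pro, Claude, Mistral)
    accept temperature; only the patterns above are assumed to reject it."""
    name = model_name.lower()
    return not any(re.search(pattern, name) for pattern in NO_TEMPERATURE_PATTERNS)
```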
- Add GEMINI_BASE_URL configuration option in .env.example
- Implement custom endpoint support in GeminiModelProvider using HttpOptions
- Update registry to pass base_url parameter to Gemini provider
- Maintain backward compatibility - uses default Google endpoint when not configured
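A condensed sketch of the wiring, assuming the google-genai SDK's `HttpOptions`:

```python
import os
from google import genai
from google.genai import types

# When GEMINI_BASE_URL is unset, http_options stays None and the client
# uses Google's default endpoint (the backward-compatible path).
base_url = os.getenv("GEMINI_BASE_URL")
http_options = types.HttpOptions(base_url=base_url) if base_url else None
client = genai.Client(api_key=os.environ["GEMINI_API_KEY"], http_options=http_options)
```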
This commit updates all references to Claude Opus 4 and Sonnet 4 to their newer 4.1 versions throughout the codebase.
The changes include:
- Updating model names in `conf/custom_models.json` and `providers/dial.py`.
- Updating aliases and descriptions to match the new model versions.
- Updating `.env.example` to reflect the new model names.
- Updating all relevant test suites to use the new model names and ensure all tests pass.
- Remove redundant path checks between Path("conf/custom_models.json") and Path.cwd() variants
- Implement proper importlib.resources.files('conf') approach for robust packaging
- Create conf/__init__.py to make conf a proper Python package
- Update pyproject.toml to include conf* in package discovery
- Clean up verbose comments and simplify resource loading logic
- Fix test mocking to use correct importlib.resources.files target
- All tests passing (8/8) with proper resource and fallback functionality
Addresses all gemini-code-assist bot feedback from PR #227
Improvements based on gemini-code-assist bot feedback:
1. **Proper importlib.resources implementation:**
- Use files("providers") / "../conf/custom_models.json" for resource loading
- Prioritize resource loading over file system paths for packaged environments
- Maintain backward compatibility with explicit config paths and env variables
2. **Remove redundant path checks:**
- Eliminated duplicate Path("conf/custom_models.json") and Path.cwd() / "conf/custom_models.json"
- Streamlined fallback logic to development path + working directory only
3. **Enhanced test coverage:**
- Mock-based testing of actual fallback scenarios with Path.exists
- Proper resource loading simulation and failure testing
- Comprehensive coverage of both resource and file system modes
4. **Robust error handling:**
- Graceful fallback from resources to file system when resource loading fails
- Clear logging of which loading method is being used
- Better error messages indicating resource vs file system loading
The implementation now follows Python packaging best practices using importlib.resources
while maintaining full backward compatibility and robust fallback behavior.
Tested: All 8 test cases pass, resource loading works in development,
file system fallback works when resources fail.
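A simplified sketch of the loading order described above; the real registry code also handles explicit config paths and environment variables:

```python
import json
import logging
from importlib import resources
from pathlib import Path

logger = logging.getLogger(__name__)

def load_custom_models() -> dict:
    try:
        # Packaged installs: read custom_models.json as package data
        # (requires conf/__init__.py so "conf" is importable).
        text = (resources.files("conf") / "custom_models.json").read_text()
        logger.debug("Loaded custom_models.json via importlib.resources")
    except (FileNotFoundError, ModuleNotFoundError):
        # Development / working-directory fallback.
        path = Path("conf/custom_models.json")
        logger.debug("Falling back to file system: %s", path)
        text = path.read_text()
    return json.loads(text)
```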
Resolves issues #203, #186, #206, #185, where the OpenRouter model registry
completely failed to load in uvx installations because the
conf/custom_models.json file was inaccessible.
Changes:
- Implement multiple path resolution strategy in OpenRouterModelRegistry
- Development: Path(__file__).parent.parent / "conf" / "custom_models.json"
- UVX working dir: Path("conf/custom_models.json")
- Current working dir: Path.cwd() / "conf" / "custom_models.json"
- Add importlib-resources fallback for Python < 3.9 compatibility
- Add comprehensive test suite for path resolution scenarios
- Ensure graceful handling when config files are missing
The fix restores full OpenRouter functionality (15 models, 62+ aliases)
for users installing via uvx while maintaining backward compatibility
for development and explicit config scenarios.
Tested: All path resolution scenarios pass, OpenRouter models load correctly
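Condensed, the strategy tries each candidate in order and returns the first that exists (a sketch; the registry wraps this with logging and error handling):

```python
from pathlib import Path
from typing import Optional

def find_config() -> Optional[Path]:
    candidates = [
        Path(__file__).parent.parent / "conf" / "custom_models.json",  # development
        Path("conf/custom_models.json"),                               # uvx working dir
        Path.cwd() / "conf" / "custom_models.json",                    # current working dir
    ]
    for candidate in candidates:
        if candidate.exists():
            return candidate
    return None  # caller degrades gracefully when no config is found
```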
- Use o3-pro throughout the codebase instead of o3-pro-2025-06-10
- Update test expectations to match o3-pro model name
- Update cassette to use o3-pro for consistency
- Ensure responses endpoint routing works correctly with o3-pro
- Use new output_text field format for o3-pro responses
- Update test expectations to use resolved model name o3-pro-2025-06-10
- Keep HTTP transport recorder and PII sanitization improvements
- Preserve both bug fix and recent GPT-5 updates
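For reference, reading o3-pro output through the Responses endpoint looks roughly like this with the current OpenAI Python SDK (the prompt is illustrative):

```python
from openai import OpenAI

client = OpenAI()

# o3-pro is served via /v1/responses; the SDK aggregates the response
# text into the output_text convenience field.
response = client.responses.create(model="o3-pro", input="Summarize this diff.")
print(response.output_text)
```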
Improvements to model name resolution
Improved instructions for multi-step workflows when continuation is available
Improved instructions for chat tool
Improved preferred-model resolution; moved the logic from the registry into each provider
Updated tests