- Fix consensus tool hardcoded temperature=0.2 bypassing model capabilities
- Add intelligent temperature inference for unknown custom models (see the sketch below)
- Support multi-model collaboration (O3, Gemini, Claude, Mistral, DeepSeek)
- Only OpenAI O-series and DeepSeek reasoner models reject temperature
- Most reasoning models (Gemini Pro, Claude, Mistral) DO support temperature
- Comprehensive logging for temperature decisions and user guidance
Resolves: https://github.com/BeehiveInnovations/zen-mcp-server/issues/245
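A minimal sketch of the temperature-inference rule described above; the helper names, marker list, and naive substring matching are illustrative assumptions, not the actual implementation:

```python
DEFAULT_TEMPERATURE = 0.2

# Name fragments for the only model families known to reject temperature
# (naive substring matching; purely illustrative).
NO_TEMPERATURE_MARKERS = ("o3", "o4-mini", "deepseek-reasoner")

def supports_temperature(model_name: str) -> bool:
    """Default unknown custom models to supporting temperature, since most
    reasoning models (Gemini Pro, Claude, Mistral) accept it."""
    name = model_name.lower()
    return not any(marker in name for marker in NO_TEMPERATURE_MARKERS)

def build_request_params(model_name: str, temperature: float | None) -> dict:
    params: dict = {"model": model_name}
    if supports_temperature(model_name):
        # Respect the caller's temperature instead of hardcoding 0.2.
        params["temperature"] = (
            DEFAULT_TEMPERATURE if temperature is None else temperature
        )
    return params
```

Under this rule, build_request_params("o3", 0.7) omits temperature entirely, while Gemini, Claude, and Mistral model names pass it through.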
- Add GEMINI_BASE_URL configuration option in .env.example
- Implement custom endpoint support in GeminiModelProvider using HttpOptions (sketched below)
- Update registry to pass base_url parameter to Gemini provider
- Maintain backward compatibility - uses default Google endpoint when not configured
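A minimal sketch of the endpoint wiring, assuming the google-genai SDK's HttpOptions; the provider's actual internals may differ:

```python
import os

from google import genai
from google.genai import types

base_url = os.getenv("GEMINI_BASE_URL")  # unset -> default Google endpoint

client = genai.Client(
    api_key=os.environ["GEMINI_API_KEY"],
    # Only pass HttpOptions when a custom endpoint is configured, which
    # preserves backward compatibility with the default endpoint.
    http_options=types.HttpOptions(base_url=base_url) if base_url else None,
)
```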
Disabled secondary tools by default (for new installations); updated README.md with instructions on how to enable them in .env
run-server.sh now displays disabled / enabled tools (when DISABLED_TOOLS is set)
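A hedged sketch of how such a filter could be applied at startup; the tool names and helper are assumptions for illustration, not the server's actual code:

```python
import os

ALL_TOOLS = {"chat", "consensus", "codereview", "debug"}  # illustrative names

def enabled_tools() -> set[str]:
    # DISABLED_TOOLS is a comma-separated list, e.g. "codereview,debug".
    disabled = {
        name.strip()
        for name in os.getenv("DISABLED_TOOLS", "").split(",")
        if name.strip()
    }
    return ALL_TOOLS - disabled
```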
Add max token limit for consistency, per review comment
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
fix: Minor tweaks to prompts
fix: Improved support for smaller models that struggle with strict structured JSON output
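One common fallback approach, shown here as an assumed sketch rather than the project's exact code: try strict parsing first, then salvage JSON from markdown fences or surrounding prose:

```python
import json
import re

def parse_loose_json(text: str) -> dict:
    """Parse strictly if possible, otherwise extract the first JSON object."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Strip common ```json fences, then grab the outermost {...} span.
        cleaned = re.sub(r"```(?:json)?", "", text)
        match = re.search(r"\{.*\}", cleaned, re.DOTALL)
        if match:
            return json.loads(match.group(0))
        raise
```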
Rearranged the "reasons to use this MCP" section to sit above the quick start (collapsed)
- Set version to 5.8.6 in pyproject.toml to match config.py
- Add automatic version sync script for GitHub Actions (sketched below)
- Configure semantic-release for proper version tracking
- Ensure __updated__ field auto-updates with each release
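A hypothetical sync step illustrating the idea; the file layout and field patterns are assumed:

```python
import re
from pathlib import Path

def sync_version() -> None:
    # Read the canonical version from config.py (e.g., __version__ = "5.8.6").
    config = Path("config.py").read_text()
    version = re.search(r'__version__\s*=\s*"([^"]+)"', config).group(1)

    # Rewrite the version field in pyproject.toml to match.
    pyproject = Path("pyproject.toml")
    updated = re.sub(
        r'^version\s*=\s*"[^"]+"',
        f'version = "{version}"',
        pyproject.read_text(),
        count=1,
        flags=re.MULTILINE,
    )
    pyproject.write_text(updated)

if __name__ == "__main__":
    sync_version()
```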
This commit updates all references to Claude Opus 4 and Sonnet 4 to their newer 4.1 versions throughout the codebase.
The changes include:
- Updating model names in `conf/custom_models.json` and `providers/dial.py`.
- Updating aliases and descriptions to match the new model versions.
- Updating `.env.example` to reflect the new model names.
- Updating all relevant test suites to use the new model names and ensure all tests pass.
The PR template was outdated and misaligned with the actual workflow behavior
introduced in PR #217. Key fixes:
- **Semantic Release**: Now matches pyproject.toml configuration
- feat → MINOR, fix/perf → PATCH (not refactor); examples below
- Added missing 'build' type from allowed_tags
- Fixed breaking change syntax (feat!, BREAKING CHANGE: in body)
- Removed incorrect 'breaking:' prefix format
- **Docker Builds**: Clarified independence from versioning
- Builds trigger on file changes (Python, Docker files)
- Manual triggering via 'docker-build' label
- Removed misleading 'trigger Docker build + version bump' claims
- **Conventional Commits**: Added link to official specification
The template now accurately reflects the semantic-release config and
docker-pr.yml workflow implementation, preventing contributor confusion.
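For illustration, a few hypothetical commit subjects and the releases they would trigger under this configuration:

```
feat: add Gemini base URL support       -> MINOR release
fix: handle empty DISABLED_TOOLS value  -> PATCH release
perf: cache model capability lookups    -> PATCH release
refactor: split provider registry       -> no release
feat!: drop legacy bump script          -> MAJOR release
```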
The manual version bumping script (scripts/bump_version.py) is now obsolete
since PR #217 introduced semantic-release automation for version management.
- Removed scripts/ directory and bump_version.py script
- Updated .dockerignore to remove reference to deleted script
Semantic versioning is now handled automatically by GitHub Actions workflows
using conventional commits and semantic-release tooling.
README improvements:
- Reduce README from 725 to 169 lines (77% reduction)
- Focus on quick start and essential information
- Link to detailed docs instead of duplicating content
- Improve scannability with clear sections and emojis
- Add concise tool categorization and workflows
Documentation structure:
- Create comprehensive getting-started.md guide
- Move detailed setup instructions from README
- Include troubleshooting, configuration templates
- Add step-by-step installation for all methods
Benefits:
- Faster onboarding for new users
- Progressive disclosure of information
- Better GitHub discovery experience
- Maintainable documentation structure
- Clear separation of concerns
The README now serves as an effective landing page while the
detailed documentation provides comprehensive guidance.
- Add missing models to all tool parameter documentation
- Update model table in advanced-usage.md with GPT-5 series
- Add Gemini 2.0 Flash and Flash Lite models
- Include detailed capabilities for each model variant
- Fix model parameter consistency across all tool docs
Models added:
- GPT-5 (gpt5): Advanced reasoning with 400K context
- GPT-5 Mini (gpt5-mini): Efficient variant
- GPT-5 Nano (gpt5-nano): Fast, low-cost variant
- Gemini 2.0 Flash (flash-2.0): Audio/video support
- Gemini 2.0 Flash Lite (flashlite): Text-only lightweight
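For reference, a hypothetical conf/custom_models.json-style entry for one of these models; the field names here are assumptions and may not match the file's actual schema:

```json
{
  "model_name": "gpt-5",
  "aliases": ["gpt5"],
  "context_window": 400000,
  "description": "GPT-5: advanced reasoning with a 400K context window"
}
```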