Commit Graph

1032 Commits

Devon Hillard
c29e7623ac fix: Remove duplicate OpenAI models from listmodels output
Fixed issue where OpenAI models appeared twice in listmodels output by:
- Removing self-referencing aliases from OpenAI model definitions (e.g., "gpt-5" no longer includes "gpt-5" in its aliases)
- Adding filter in listmodels.py to skip aliases that match the model name
- Cleaning up inconsistent alias naming (o3-pro -> o3pro)

This ensures each model appears only once in the listing while preserving all useful aliases.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-09 19:00:43 -06:00
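The dedup fix above can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual listmodels.py code; the `models` dict shape and function name are assumptions.

```python
def format_model_listing(models: dict) -> list[str]:
    """List each model once, with self-referencing aliases filtered out."""
    lines = []
    for name, caps in sorted(models.items()):
        # Skip aliases that match the model name so nothing appears twice
        aliases = [a for a in caps.get("aliases", []) if a != name]
        suffix = f" (aliases: {', '.join(aliases)})" if aliases else ""
        lines.append(f"{name}{suffix}")
    return lines
```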
Sven Lito
525f4598ce refactor: address code review feedback from Gemini
- Extract restriction checking logic into reusable helper method
- Refactor validate_model_name to reduce code duplication
- Fix logging import by using existing module-level logger
- Clean up test file by removing print statement and main block
- All tests continue to pass after refactoring
2025-09-05 11:04:45 +07:00
github-actions[bot]
fab1f24475 chore: sync version to config.py [skip ci] 2025-09-05 03:54:23 +00:00
Sven Lito
2db1323813 fix: respect custom OpenAI model temperature settings (#245)
- OpenAI provider now checks custom models registry for user configurations
- Custom models with supports_temperature=false no longer send temperature to API
- Fixes 400 errors for custom o3/gpt-5 models configured without temperature support
- Added comprehensive tests to verify the fix works correctly
- Maintains backward compatibility with built-in models

Fixes #245
2025-09-05 10:53:28 +07:00
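A minimal sketch of the guard this fix describes, assuming a registry keyed by model name with a `supports_temperature` flag (names are illustrative, not the provider's actual API):

```python
def build_completion_kwargs(model_name, messages, temperature, custom_registry):
    """Only attach temperature when the model's configuration allows it."""
    kwargs = {"model": model_name, "messages": messages}
    caps = custom_registry.get(model_name, {})
    # Built-in models keep their old behavior: default to supporting temperature
    if caps.get("supports_temperature", True):
        kwargs["temperature"] = temperature
    return kwargs
```

A custom o3/gpt-5 entry configured with `supports_temperature: false` would then never send the parameter, avoiding the 400 errors.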
谢栋梁
0760b31f8a style: fix trailing whitespace in consensus.py
Remove trailing whitespace to pass CI formatting checks
2025-09-03 11:10:02 +08:00
谢栋梁
30a8952fbc refactor: optimize ModelContext creation in consensus tool
Address code review feedback by creating ModelContext instance once
at the beginning of _consult_model method instead of creating it twice.

- Move ModelContext import to method beginning for better practice
- Create single ModelContext instance and reuse for both file processing
  and temperature validation
- Remove redundant ModelContext creation on line 558
- Improve code clarity and efficiency as suggested by code review
2025-09-03 11:03:04 +08:00
谢栋梁
9044b63809 fix: resolve consensus tool model_context parameter missing issue
Fixed runtime bug where _prepare_file_content_for_prompt was called
without required model_context parameter, causing RuntimeError when
processing requests with relevant_files.

- Create ModelContext instance with model_name in _consult_model method
- Pass model_context parameter to _prepare_file_content_for_prompt call
- Add comprehensive regression test to prevent future occurrences
- Maintain consensus tool's blinded design with independent model contexts
2025-09-03 10:55:22 +08:00
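The fix and the follow-up refactor above amount to creating one context and threading it through. A simplified stand-in (the real ModelContext and method names live in the consensus tool; these are placeholders):

```python
class ModelContext:
    # Stand-in for the real ModelContext; holds per-model state
    def __init__(self, model_name: str):
        self.model_name = model_name

def consult_model(model_name: str, relevant_files: list[str]) -> dict:
    # Create the context once, then reuse it for both file processing
    # and temperature validation instead of constructing it twice
    model_context = ModelContext(model_name)
    file_content = prepare_file_content(relevant_files, model_context=model_context)
    return {"model": model_context.model_name, "files": file_content}

def prepare_file_content(files, *, model_context):
    # Failing fast here mirrors the original RuntimeError when the
    # required model_context parameter was omitted
    if model_context is None:
        raise RuntimeError("model_context is required")
    return list(files)
```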
谢栋梁
4493a69333 style: fix ruff import sorting issue
Sort dotenv imports alphabetically to comply with ruff I001 rule
2025-09-02 08:58:21 +08:00
谢栋梁
d34c299f02 fix: resolve logging timing and import organization issues
- Move dotenv_values import to top level with load_dotenv
- Fix logging sequence issue by deferring ZEN_MCP_FORCE_ENV_OVERRIDE logs until after logger configuration
- Apply Black formatting to ensure consistent code style
2025-09-02 08:55:25 +08:00
谢栋梁
93ce6987b6 feat: add configurable environment variable override system
Add ZEN_MCP_FORCE_ENV_OVERRIDE configuration to control whether .env file values
override system environment variables. This prevents conflicts when multiple AI
tools pass different cached environment variables to the MCP server.

- Use dotenv_values() to read configuration from .env file only
- Apply conditional override based on configuration setting
- Add appropriate logging for transparency
- Update .env.example with detailed configuration documentation
- Maintains backward compatibility with default behavior (false)
2025-09-02 08:35:06 +08:00
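The override logic described above can be sketched like this, where `values` would come from python-dotenv's `dotenv_values(".env")` (the helper name is an assumption):

```python
import os

def apply_dotenv(values: dict, force_override: bool) -> None:
    # With ZEN_MCP_FORCE_ENV_OVERRIDE enabled, .env values win over
    # whatever cached environment the calling AI tool passed in;
    # otherwise the system environment keeps precedence (the default,
    # backward-compatible behavior).
    for key, val in values.items():
        if force_override or key not in os.environ:
            os.environ[key] = val
```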
github-actions[bot]
12090646ee chore: sync version to config.py [skip ci] 2025-08-26 07:09:08 +00:00
semantic-release
8749b4c6a8 chore(release): 5.11.0
Automatically generated by python-semantic-release
2025-08-26 07:09:02 +00:00
Fahad
ce56d16240 feat: Codex CLI support
docs: Update instructions to discover uvx automatically, as it may not be installed system-wide
2025-08-26 11:08:16 +04:00
github-actions[bot]
973546990f chore: sync version to config.py [skip ci] 2025-08-24 17:29:59 +00:00
semantic-release
2c74f1e3c6 chore(release): 5.10.3
Automatically generated by python-semantic-release
2025-08-24 17:29:54 +00:00
Beehive Innovations
472c13bb2e Merge pull request #253 from svnlto/fix/consensus-temperature-handling
fix: resolve temperature handling issues for O3/custom models (#245)
2025-08-24 21:29:14 +04:00
github-actions[bot]
d6e6808be5 chore: sync version to config.py [skip ci] 2025-08-24 17:25:59 +00:00
semantic-release
f3dbe06fea chore(release): 5.10.2
Automatically generated by python-semantic-release
2025-08-24 17:25:54 +00:00
Fahad
a07036e680 fix: another fix for https://github.com/BeehiveInnovations/zen-mcp-server/issues/251 2025-08-24 21:25:01 +04:00
Sven Lito
6bd9d6709a fix: address test failures and PR feedback
- Fix ModelContext constructor call in consensus tool (remove invalid parameters)
- Refactor temperature pattern matching for better readability per code review
- All tests now passing (799/799 passed)
2025-08-23 18:50:49 +07:00
Sven Lito
3b4fd88d7e fix: resolve temperature handling issues for O3/custom models (#245)
- Fix consensus tool hardcoded temperature=0.2 bypassing model capabilities
- Add intelligent temperature inference for unknown custom models
- Support multi-model collaboration (O3, Gemini, Claude, Mistral, DeepSeek)
- Only OpenAI O-series and DeepSeek reasoner models reject temperature
- Most reasoning models (Gemini Pro, Claude, Mistral) DO support temperature
- Comprehensive logging for temperature decisions and user guidance

Resolves: https://github.com/BeehiveInnovations/zen-mcp-server/issues/245
2025-08-23 18:43:51 +07:00
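The inference rule above ("only OpenAI O-series and DeepSeek reasoner reject temperature") might look roughly like this; the patterns are a guess at the shape of the heuristic, not the actual code:

```python
import re

# Models whose APIs reject a temperature parameter (per the commit:
# O-series and DeepSeek reasoner only; Gemini Pro, Claude, Mistral accept it)
NO_TEMPERATURE_PATTERNS = [
    re.compile(r"^o[1-9]\b"),        # OpenAI O-series: o1, o3, o3-mini, ...
    re.compile(r"deepseek-reasoner"),
]

def supports_temperature(model_name: str) -> bool:
    name = model_name.lower()
    return not any(p.search(name) for p in NO_TEMPERATURE_PATTERNS)
```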
github-actions[bot]
9da5c37809 chore: sync version to config.py [skip ci] 2025-08-23 10:04:26 +00:00
Fahad
f89afd1a72 fix: handle safety_feedback from Gemini (https://github.com/BeehiveInnovations/zen-mcp-server/issues/251). FinishReason.STOP can be a hidden safety block from Gemini, or can be issued when it chooses not to respond. 2025-08-23 14:03:46 +04:00
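In spirit, the fix means never trusting a STOP finish reason to guarantee text. A hedged sketch over a plain-dict response (the real provider works with the SDK's response objects):

```python
def extract_text(response: dict) -> str:
    # Even FinishReason.STOP can accompany an empty reply when Gemini
    # silently blocks content, so never assume text is present
    candidates = response.get("candidates") or []
    if not candidates:
        raise RuntimeError(f"No candidates; safety feedback: {response.get('prompt_feedback')}")
    parts = candidates[0].get("content", {}).get("parts", [])
    text = "".join(p.get("text", "") for p in parts)
    if not text:
        raise RuntimeError("Empty response: possible hidden safety block")
    return text
```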
dragonfsky
33ea896c51 style: apply Black formatting to use double quotes
Fix Black linting error in CI checks
2025-08-22 18:40:25 +08:00
dragonfsky
023940be3e refactor: simplify Gemini provider initialization using kwargs dict
As suggested by code review, this reduces code duplication and improves maintainability
2025-08-22 18:37:11 +08:00
dragonfsky
956e8a6927 fix: use types.HttpOptions from module imports instead of local import
Addresses linting issue raised by CI checks
2025-08-22 18:34:39 +08:00
dragonfsky
462bce002e feat: add custom Gemini endpoint support
- Add GEMINI_BASE_URL configuration option in .env.example
- Implement custom endpoint support in GeminiModelProvider using HttpOptions
- Update registry to pass base_url parameter to Gemini provider
- Maintain backward compatibility - uses default Google endpoint when not configured
2025-08-22 18:01:00 +08:00
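The kwargs-dict pattern from the refactor above, combined with the optional GEMINI_BASE_URL, can be sketched as follows. A plain dict stands in for the SDK's `types.HttpOptions`; the function name is hypothetical.

```python
import os

def build_gemini_client_kwargs(api_key: str) -> dict:
    # Assemble constructor kwargs once; optional settings are added only
    # when configured, avoiding duplicated if/else client creation
    kwargs = {"api_key": api_key}
    base_url = os.environ.get("GEMINI_BASE_URL")
    if base_url:
        # In the real provider this would be types.HttpOptions(base_url=...)
        kwargs["http_options"] = {"base_url": base_url}
    return kwargs
```

When GEMINI_BASE_URL is unset, the default Google endpoint is used, preserving backward compatibility.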
github-actions[bot]
4c87afd479 chore: sync version to config.py [skip ci] 2025-08-22 05:24:43 +00:00
semantic-release
6a362969fd chore(release): 5.10.0
Automatically generated by python-semantic-release
2025-08-22 05:24:39 +00:00
Fahad
4b202f5d1d feat: refactored and tweaked model descriptions / schema to use fewer tokens at launch (average reduction per field description: 60-80%) without sacrificing tool effectiveness
Disabled secondary tools by default (for new installations), updated README.md with instructions on how to enable these in .env
run-server.sh now displays disabled / enabled tools (when DISABLED_TOOLS is set)
2025-08-22 09:23:59 +04:00
Fahad
6921616db3 WIP: tool description / schema updates 2025-08-22 06:53:05 +04:00
dknedlik
09d6ba4eac Update conf/custom_models.json
Adding max token for consistency per review comment

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-21 14:45:28 -05:00
dknedlik
d9b5c77dd8 Update conf/custom_models.json
Updating per code review comments.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-21 14:44:54 -05:00
David Knedlik
4930824052 feat: Add comprehensive GPT-5 series model support
- Add GPT-5, GPT-5-mini, and GPT-5-nano models to unified configuration
- Implement proper thinking mode support via dynamic capability checking
- Add OpenAI provider model enumeration methods for registry integration
- Update tests to cover all GPT-5 models and their aliases
- Fix critical bug where thinking mode was hardcoded instead of using model capabilities

Breaking Changes:
- None (backward compatible)

New Models Available:
- gpt-5 (400K context, 128K output, reasoning support)
- gpt-5-mini (400K context, 128K output, efficient variant)
- gpt-5-nano (400K context, fastest/cheapest variant)

Aliases:
- gpt5, gpt5-mini, gpt5mini, gpt5-nano, gpt5nano, nano

All models support:
- Extended thinking mode (reasoning tokens)
- Vision capabilities
- JSON mode
- Function calling

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-21 14:27:00 -05:00
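The unified configuration plus dynamic capability check described above might take roughly this shape; field names are illustrative and may differ from the real registry:

```python
# Hypothetical shape of the GPT-5 entries in the unified model configuration
GPT5_MODELS = {
    "gpt-5":      {"context_window": 400_000, "max_output_tokens": 128_000,
                   "supports_extended_thinking": True, "aliases": ["gpt5"]},
    "gpt-5-mini": {"context_window": 400_000, "max_output_tokens": 128_000,
                   "supports_extended_thinking": True, "aliases": ["gpt5-mini", "gpt5mini"]},
    "gpt-5-nano": {"context_window": 400_000, "max_output_tokens": 128_000,
                   "supports_extended_thinking": True, "aliases": ["gpt5-nano", "gpt5nano", "nano"]},
}

def supports_thinking(model_name: str) -> bool:
    # Dynamic capability lookup instead of the hardcoded flag the
    # commit calls out as the critical bug
    return GPT5_MODELS.get(model_name, {}).get("supports_extended_thinking", False)
```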
github-actions[bot]
12542054a2 chore: sync version to config.py [skip ci] 2025-08-21 10:05:20 +00:00
semantic-release
eedd9dd437 chore(release): 5.9.0
Automatically generated by python-semantic-release
2025-08-21 10:05:14 +00:00
Fahad
80d21e57c0 feat: refactored and improved codereview in line with precommit. Reviews are now either external (default) or internal. This avoids anxiety and wasted tokens when Claude incorrectly decides to be 'confident' about its own changes and bungles things up.
fix: Minor tweaks to prompts
fix: Improved support for smaller models that struggle with strict structured JSON output
Rearranged the reasons to use this MCP above the Quick Start section in the README (collapsed)
2025-08-21 14:04:32 +04:00
Fahad
d30c212029 refactor: minor prompt tweaks 2025-08-21 12:23:13 +04:00
Fahad
90821b51ff docs: update instructions for precommit 2025-08-20 16:34:08 +04:00
semantic-release
1542fd3dac chore(release): 5.8.6
Automatically generated by python-semantic-release
2025-08-20 12:29:41 +00:00
Fahad
1c973afb00 fix: escape backslashes in TOML regex pattern
Fixed a TOML decode error in the pyproject.toml version_pattern field
2025-08-20 16:28:43 +04:00
Fahad
340b58f2e7 fix: restore proper version 5.8.6
Correcting version from semantic-release auto-update back to proper 5.8.6
2025-08-20 16:26:46 +04:00
Fahad
90a4195381 fix: establish version 5.8.6 and add version sync automation
- Set version to 5.8.6 in pyproject.toml to match config.py
- Add automatic version sync script for GitHub Actions
- Configure semantic-release for proper version tracking
- Ensure __updated__ field auto-updates with each release
2025-08-20 16:26:16 +04:00
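A version-sync step like the one this commit automates could be sketched as below; the function operates on file contents as strings, and the name and regexes are assumptions rather than the actual script:

```python
import re

def sync_version(pyproject_text: str, config_text: str) -> str:
    """Copy the release version from pyproject.toml into config.py."""
    version = re.search(r'^version\s*=\s*"([^"]+)"', pyproject_text, re.M).group(1)
    return re.sub(r'__version__\s*=\s*"[^"]+"', f'__version__ = "{version}"', config_text)
```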
github-actions[bot]
4f82f65005 chore: sync version to config.py [skip ci] 2025-08-20 12:23:50 +00:00
semantic-release
284352baa8 chore(release): 1.1.0
Automatically generated by python-semantic-release
2025-08-20 12:23:45 +00:00
Fahad
2966dcf268 feat: improvements to precommit
fix: version
2025-08-20 16:22:52 +04:00
Fahad
77e8ed1a9f Further improvements to precommit to ensure required steps are followed precisely 2025-08-20 16:08:22 +04:00
Fahad
7100d8567e Removed test files 2025-08-20 15:19:11 +04:00
Fahad
57200a8a2e Precommit updated to always perform external analysis (via _other_ model) unless specified not to. This prevents Claude from being overconfident and performing subpar precommit checks.
Improved precommit continuations to be immediate
Workflow state restoration added between stateless calls
Fixed incorrect token limit check
2025-08-20 15:19:01 +04:00
Fahad
0af9202012 Precommit updated to always prefer external analysis (via _other_ model) unless specified not to. This prevents Claude from being overconfident and performing subpar precommit checks. 2025-08-20 11:55:40 +04:00