Commit Graph

112 Commits

Author SHA1 Message Date
Fahad
7a6fa0e77a refactor: generic name for the CLI agent 2025-10-03 10:55:10 +04:00
Fahad
6cab9e56fc feat: added intelligence_score to the model capabilities schema; a 1-20 number that can be specified to influence the sort order of models presented to the CLI in auto selection mode
fix: model definitions re-introduced into the schema, but intelligently: only a summary is generated per tool. Required to ensure the CLI calls and uses the correct model
fix: removed `model` param from some tools where this wasn't needed
fix: fixed adherence to `*_ALLOWED_MODELS` by advertising only the allowed models to the CLI
fix: removed duplicates across providers when passing canonical names back to the CLI; the first enabled provider wins
2025-10-02 21:43:44 +04:00
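
A minimal sketch of how an intelligence_score field could drive the auto-selection ordering described above; the keys and the default value are illustrative, not the repository's actual schema:

```python
# Illustrative capability entries; only intelligence_score matters here.
capabilities = [
    {"model_name": "frontier-model", "intelligence_score": 18},
    {"model_name": "small-model", "intelligence_score": 6},
    {"model_name": "mid-model"},  # unspecified scores fall back to a neutral 10 (assumed default)
]

# Higher scores are presented to the CLI first when it picks a model itself.
ranked = sorted(capabilities, key=lambda c: c.get("intelligence_score", 10), reverse=True)
print([c["model_name"] for c in ranked])
# ['frontier-model', 'mid-model', 'small-model']
```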
Fahad
d285fadf4c fix: custom provider must only accept a model if it's declared explicitly. Upon model rejection (in auto mode) the list of available models is returned up-front to help with selection. 2025-10-02 13:49:23 +04:00
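
A sketch of the "declared models only" rule for the custom provider, with the available-models hint returned on rejection; class and method names are hypothetical, not the project's actual API:

```python
class CustomProvider:
    def __init__(self, declared_models: dict[str, dict]):
        # Only models explicitly declared in the custom configuration are accepted.
        self._declared = declared_models

    def validate_model_name(self, name: str) -> bool:
        return name in self._declared

    def reject_with_choices(self, name: str) -> str:
        # In auto mode, surface the available models up-front to help selection.
        available = ", ".join(sorted(self._declared))
        return f"Model '{name}' is not declared for this provider. Available models: {available}"


provider = CustomProvider({"local-llama": {}, "local-qwen": {}})
assert provider.validate_model_name("local-llama")
print(provider.reject_with_choices("gpt-5"))
```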
Fahad
82a03ce63f fix: base class should return MODEL_CAPABILITIES 2025-10-02 13:04:33 +04:00
Fahad
693b84db2b refactor: cleanup provider base class; cleanup shared responsibilities; cleanup public contract
docs: document provider base class
refactor: cleanup custom provider, it should only deal with `is_custom` model configurations
fix: make sure openrouter provider does not load `is_custom` models
fix: listmodels tool cleanup
2025-10-02 12:59:45 +04:00
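
A minimal sketch of splitting custom-endpoint models from OpenRouter models by an `is_custom` flag, so each provider loads only its own entries; the registry shape and function name are assumptions:

```python
def split_registry_entries(entries: list[dict]) -> tuple[list[dict], list[dict]]:
    """Return (custom_only, openrouter_only) so each provider loads its own set."""
    custom = [e for e in entries if e.get("is_custom", False)]
    openrouter = [e for e in entries if not e.get("is_custom", False)]
    return custom, openrouter


entries = [
    {"model_name": "llama-3.3-70b", "is_custom": True},
    {"model_name": "openrouter/some-model"},
]
custom_models, openrouter_models = split_registry_entries(entries)
print(len(custom_models), len(openrouter_models))  # 1 1
```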
Fahad
6ec2033f34 refactor: cleanup 2025-10-02 11:47:09 +04:00
Fahad
7fe9fc49f8 refactor: cleanup token counting 2025-10-02 11:35:29 +04:00
Fahad
14a35afa1d refactor: moved image related code out of base provider into a separate utility 2025-10-02 11:23:15 +04:00
Fahad
a254ff2220 refactor: removed method from provider, should use model capabilities instead
refactor: cleanup temperature factory method
2025-10-02 11:08:56 +04:00
Fahad
9c11ecc4bf refactor: clean temperature inference 2025-10-02 10:41:05 +04:00
Fahad
6d237d0970 refactor: moved temperature method from base provider to model capabilities
refactor: model listing cleanup, moved logic to model_capabilities.py
docs: added AGENTS.md for onboarding Codex
2025-10-02 10:25:41 +04:00
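
A sketch of temperature handling living on the capabilities object rather than the provider base class; field and method names are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelCapabilities:
    model_name: str
    supports_temperature: bool = True
    temperature_min: float = 0.0
    temperature_max: float = 2.0

    def resolve_temperature(self, requested: Optional[float]) -> Optional[float]:
        """Clamp a requested temperature, or drop it for models that reject one."""
        if not self.supports_temperature or requested is None:
            return None  # caller omits the parameter entirely
        return max(self.temperature_min, min(self.temperature_max, requested))


o3 = ModelCapabilities("o3", supports_temperature=False)
gemini = ModelCapabilities("gemini-2.5-pro")
print(o3.resolve_temperature(0.7), gemini.resolve_temperature(3.0))  # None 2.0
```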
Fahad
f461cb4519 refactor: moved temperature method from base provider to model capabilities
docs: added AGENTS.md for onboarding Codex
2025-10-02 09:17:36 +04:00
Fahad
1dc25f6c3d refactor: renaming to reflect underlying type
docs: updated to reflect new modules
2025-10-02 09:07:40 +04:00
Fahad
2b10adcaf2 refactor: removed hook from base class, turned into helper static method instead 2025-10-02 08:45:59 +04:00
Fahad
250545e34f refactor: removed hard coded checks, use model capabilities instead 2025-10-02 08:32:51 +04:00
Fahad
182aa627df refactor: code cleanup 2025-10-02 08:09:44 +04:00
Beehive Innovations
48885e7a5b Merge pull request #250 from DragonFSKY/feat/custom-gemini-endpoint
feat: add custom Gemini endpoint support
2025-10-02 06:01:38 +04:00
Fahad
bf9344963f Merge branch 'pr-247-modified' 2025-10-01 19:51:29 +04:00
Fahad
acef7da93c Cleanup 2025-10-01 19:51:24 +04:00
Fahad
484d78d96b Cleanup 2025-10-01 19:46:43 +04:00
Beehive Innovations
f51da6e5f8 Merge branch 'main' into main 2025-10-01 18:19:02 +04:00
Devon Hillard
c29e7623ac fix: Remove duplicate OpenAI models from listmodels output
Fixed issue where OpenAI models appeared twice in listmodels output by:
- Removing self-referencing aliases from OpenAI model definitions (e.g., "gpt-5" no longer includes "gpt-5" in its aliases)
- Adding filter in listmodels.py to skip aliases that match the model name
- Cleaned up inconsistent alias naming (o3-pro -> o3pro)

This ensures each model appears only once in the listing while preserving all useful aliases.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-09 19:00:43 -06:00
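
A small sketch of the duplicate-alias filter described above: skip any alias that is just the canonical model name again. Data shapes are illustrative:

```python
def visible_aliases(model_name: str, aliases: list[str]) -> list[str]:
    """Drop self-referencing aliases so each model is listed only once."""
    return [a for a in aliases if a.lower() != model_name.lower()]


print(visible_aliases("gpt-5", ["gpt-5", "gpt5"]))  # ['gpt5']
```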
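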
Sven Lito
525f4598ce refactor: address code review feedback from Gemini
- Extract restriction checking logic into reusable helper method
- Refactor validate_model_name to reduce code duplication
- Fix logging import by using existing module-level logger
- Clean up test file by removing print statement and main block
- All tests continue to pass after refactoring
2025-09-05 11:04:45 +07:00
Sven Lito
2db1323813 fix: respect custom OpenAI model temperature settings (#245)
- OpenAI provider now checks custom models registry for user configurations
- Custom models with supports_temperature=false no longer send temperature to API
- Fixes 400 errors for custom o3/gpt-5 models configured without temperature support
- Added comprehensive tests to verify the fix works correctly
- Maintains backward compatibility with built-in models

Fixes #245
2025-09-05 10:53:28 +07:00
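
A sketch of conditionally omitting `temperature` from an OpenAI-style request when a custom model is configured with supports_temperature=false; the registry lookup and payload shape are assumptions for illustration:

```python
def build_completion_kwargs(model: str, registry: dict[str, dict], temperature: float = 0.3) -> dict:
    params: dict = {"model": model}
    config = registry.get(model, {})
    if config.get("supports_temperature", True):
        params["temperature"] = temperature
    # Custom o3/gpt-5 entries declared without temperature support simply
    # never receive the parameter, avoiding 400 errors from the API.
    return params


custom_registry = {"o3": {"supports_temperature": False}}
print(build_completion_kwargs("o3", custom_registry))       # {'model': 'o3'}
print(build_completion_kwargs("gpt-4.1", custom_registry))  # includes temperature
```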
Beehive Innovations
472c13bb2e Merge pull request #253 from svnlto/fix/consensus-temperature-handling
fix: resolve temperature handling issues for O3/custom models (#245)
2025-08-24 21:29:14 +04:00
Fahad
a07036e680 fix: another fix for https://github.com/BeehiveInnovations/zen-mcp-server/issues/251 2025-08-24 21:25:01 +04:00
Sven Lito
6bd9d6709a fix: address test failures and PR feedback
- Fix ModelContext constructor call in consensus tool (remove invalid parameters)
- Refactor temperature pattern matching for better readability per code review
- All tests now passing (799/799 passed)
2025-08-23 18:50:49 +07:00
Sven Lito
3b4fd88d7e fix: resolve temperature handling issues for O3/custom models (#245)
- Fix consensus tool hardcoded temperature=0.2 bypassing model capabilities
- Add intelligent temperature inference for unknown custom models
- Support multi-model collaboration (O3, Gemini, Claude, Mistral, DeepSeek)
- Only OpenAI O-series and DeepSeek reasoner models reject temperature
- Most reasoning models (Gemini Pro, Claude, Mistral) DO support temperature
- Comprehensive logging for temperature decisions and user guidance

Resolves: https://github.com/BeehiveInnovations/zen-mcp-server/issues/245
2025-08-23 18:43:51 +07:00
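
A sketch of the name-pattern inference described above: assume temperature is supported unless the model looks like an OpenAI O-series or DeepSeek reasoner model. The patterns are illustrative, not the project's exact rules:

```python
import re

_NO_TEMPERATURE_PATTERNS = (
    re.compile(r"^o[1-9](-.*)?$", re.IGNORECASE),    # o1, o3, o3-pro, o4-mini, ...
    re.compile(r"deepseek.*reasoner", re.IGNORECASE),
)


def infers_temperature_support(model_name: str) -> bool:
    """Unknown custom models default to supporting temperature."""
    return not any(p.search(model_name) for p in _NO_TEMPERATURE_PATTERNS)


for name in ("o3-mini", "deepseek-reasoner", "gemini-2.5-pro", "mistral-large"):
    print(name, infers_temperature_support(name))
# o3-mini False, deepseek-reasoner False, gemini-2.5-pro True, mistral-large True
```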
Fahad
f89afd1a72 fix: https://github.com/BeehiveInnovations/zen-mcp-server/issues/251 added handling for safety_feedback from Gemini. FinishReason.STOP can be a hidden safety block from gemini or issued when it chooses not to respond. 2025-08-23 14:03:46 +04:00
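
A sketch of treating an empty Gemini response as a possible hidden safety block, as the fix above describes. Attribute access is duck-typed on purpose: the real google-genai response fields may differ from what is assumed here:

```python
def extract_text_or_explain(response) -> str:
    candidates = getattr(response, "candidates", None) or []
    if not candidates:
        # No candidates at all usually means the prompt itself was blocked.
        return "Response blocked before generation (no candidates returned)."
    candidate = candidates[0]
    text = "".join(
        getattr(part, "text", "") or ""
        for part in getattr(getattr(candidate, "content", None), "parts", None) or []
    )
    if text:
        return text
    # FinishReason.STOP with empty content can be a silent safety block or the
    # model declining to respond; surface that instead of returning "".
    reason = getattr(candidate, "finish_reason", "UNKNOWN")
    return f"Model returned no content (finish_reason={reason}); possible safety block."
```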
dragonfsky
33ea896c51 style: apply Black formatting to use double quotes
Fix Black linting error in CI checks
2025-08-22 18:40:25 +08:00
dragonfsky
023940be3e refactor: simplify Gemini provider initialization using kwargs dict
As suggested by code review, this reduces code duplication and improves maintainability
2025-08-22 18:37:11 +08:00
dragonfsky
956e8a6927 fix: use types.HttpOptions from module imports instead of local import
Addresses linting issue raised by CI checks
2025-08-22 18:34:39 +08:00
dragonfsky
462bce002e feat: add custom Gemini endpoint support
- Add GEMINI_BASE_URL configuration option in .env.example
- Implement custom endpoint support in GeminiModelProvider using HttpOptions
- Update registry to pass base_url parameter to Gemini provider
- Maintain backward compatibility - uses default Google endpoint when not configured
2025-08-22 18:01:00 +08:00
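
A sketch of wiring an optional GEMINI_BASE_URL into the client via HttpOptions, assuming the google-genai SDK; the kwargs-style construction mirrors the refactor above but is not the project's exact code:

```python
import os

from google import genai
from google.genai import types


def build_gemini_client(api_key: str) -> genai.Client:
    kwargs: dict = {"api_key": api_key}
    base_url = os.getenv("GEMINI_BASE_URL")
    if base_url:
        # Only override the endpoint when explicitly configured; otherwise the
        # default Google endpoint is used, preserving backward compatibility.
        kwargs["http_options"] = types.HttpOptions(base_url=base_url)
    return genai.Client(**kwargs)
```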
David Knedlik
4930824052 feat: Add comprehensive GPT-5 series model support
- Add GPT-5, GPT-5-mini, and GPT-5-nano models to unified configuration
- Implement proper thinking mode support via dynamic capability checking
- Add OpenAI provider model enumeration methods for registry integration
- Update tests to cover all GPT-5 models and their aliases
- Fix critical bug where thinking mode was hardcoded instead of using model capabilities

Breaking Changes:
- None (backward compatible)

New Models Available:
- gpt-5 (400K context, 128K output, reasoning support)
- gpt-5-mini (400K context, 128K output, efficient variant)
- gpt-5-nano (400K context, fastest/cheapest variant)

Aliases:
- gpt5, gpt5-mini, gpt5mini, gpt5-nano, gpt5nano, nano

All models support:
- Extended thinking mode (reasoning tokens)
- Vision capabilities
- JSON mode
- Function calling

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-21 14:27:00 -05:00
google-labs-jules[bot]
0959d6f0fa feat: Update Claude models to Opus 4.1 and Sonnet 4.1
This commit updates all references to Claude Opus 4 and Sonnet 4 to their newer 4.1 versions throughout the codebase.

The changes include:
- Updating model names in `conf/custom_models.json` and `providers/dial.py`.
- Updating aliases and descriptions to match the new model versions.
- Updating `.env.example` to reflect the new model names.
- Updating all relevant test suites to use the new model names and ensure all tests pass.
2025-08-17 16:08:52 +00:00
Sven Lito
db07fa4651 Apply Black formatting to fix CI formatting check 2025-08-10 22:21:16 +07:00
Sven Lito
5e599b9e7d Complete PR review feedback implementation with clean importlib.resources approach
- Remove redundant path checks between Path("conf/custom_models.json") and Path.cwd() variants
- Implement proper importlib.resources.files('conf') approach for robust packaging
- Create conf/__init__.py to make conf a proper Python package
- Update pyproject.toml to include conf* in package discovery
- Clean up verbose comments and simplify resource loading logic
- Fix test mocking to use correct importlib.resources.files target
- All tests passing (8/8) with proper resource and fallback functionality

Addresses all gemini-code-assist bot feedback from PR #227
2025-08-10 22:13:25 +07:00
Sven Lito
84de9b026f Address PR review feedback: Implement proper importlib.resources approach
Improvements based on gemini-code-assist bot feedback:

1. **Proper importlib.resources implementation:**
   - Use files("providers") / "../conf/custom_models.json" for resource loading
   - Prioritize resource loading over file system paths for packaged environments
   - Maintain backward compatibility with explicit config paths and env variables

2. **Remove redundant path checks:**
   - Eliminated duplicate Path("conf/custom_models.json") and Path.cwd() / "conf/custom_models.json"
   - Streamlined fallback logic to development path + working directory only

3. **Enhanced test coverage:**
   - Mock-based testing of actual fallback scenarios with Path.exists
   - Proper resource loading simulation and failure testing
   - Comprehensive coverage of both resource and file system modes

4. **Robust error handling:**
   - Graceful fallback from resources to file system when resource loading fails
   - Clear logging of which loading method is being used
   - Better error messages indicating resource vs file system loading

The implementation now follows Python packaging best practices using importlib.resources
while maintaining full backward compatibility and robust fallback behavior.

Tested: All 8 test cases pass, resource loading works in development,
file system fallback works when resources fail.
2025-08-10 21:36:40 +07:00
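
A sketch of the packaged-resource-first loading described above: try importlib.resources (which works for wheel/uvx installs once `conf` is a package), then fall back to a development path. The helper name and fallback layout are assumptions:

```python
import json
from importlib import resources
from pathlib import Path


def load_custom_models_config() -> dict:
    try:
        text = (resources.files("conf") / "custom_models.json").read_text(encoding="utf-8")
        return json.loads(text)
    except (ModuleNotFoundError, FileNotFoundError):
        # Development checkout: read the file relative to the repo layout.
        dev_path = Path(__file__).resolve().parent / "conf" / "custom_models.json"
        return json.loads(dev_path.read_text(encoding="utf-8"))
```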
Sven Lito
5565f59a1c Fix uvx resource packaging issues for OpenRouter functionality
Resolves issues #203, #186, #206, #185 where OpenRouter model registry
completely failed to load in uvx installations due to inaccessible
conf/custom_models.json file.

Changes:
- Implement multiple path resolution strategy in OpenRouterModelRegistry
  - Development: Path(__file__).parent.parent / "conf" / "custom_models.json"
  - UVX working dir: Path("conf/custom_models.json")
  - Current working dir: Path.cwd() / "conf" / "custom_models.json"
- Add importlib-resources fallback for Python < 3.9 compatibility
- Add comprehensive test suite for path resolution scenarios
- Ensure graceful handling when config files are missing

The fix restores full OpenRouter functionality (15 models, 62+ aliases)
for users installing via uvx while maintaining backward compatibility
for development and explicit config scenarios.

Tested: All path resolution scenarios pass, OpenRouter models load correctly
2025-08-10 21:27:48 +07:00
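
A sketch of the multiple-path resolution strategy listed above; the candidate order mirrors the commit message, the helper name is illustrative, and only the first existing candidate is used:

```python
from pathlib import Path
from typing import Optional


def resolve_custom_models_path(explicit: Optional[str] = None) -> Optional[Path]:
    candidates = []
    if explicit:
        candidates.append(Path(explicit))  # explicit config path or env variable
    candidates.append(Path(__file__).parent.parent / "conf" / "custom_models.json")  # development
    candidates.append(Path("conf/custom_models.json"))              # uvx working directory
    candidates.append(Path.cwd() / "conf" / "custom_models.json")   # current working directory
    for candidate in candidates:
        if candidate.is_file():
            return candidate
    return None  # caller degrades gracefully when no config is found
```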
Fahad
f9dd55cfd3 Fix linting issues: add missing base64 import and remove unused import
- Add missing base64 import in providers/base.py
- Remove unused base64 import from providers/openai_compatible.py
- All tests now pass (19/19 image validation tests)
- Code quality checks pass 100%
2025-08-08 11:17:50 +05:00
Beehive Innovations
c993942efc Update base.py
removed unused import
2025-08-08 10:13:19 +04:00
Beehive Innovations
f7a079bc35 Merge branch 'main' into refactor-image-validation 2025-08-07 23:12:00 -07:00
Fahad
fcb0fe3ef2 Fix o3-pro model resolution to use o3-pro consistently
- Use o3-pro throughout the codebase instead of o3-pro-2025-06-10
- Update test expectations to match o3-pro model name
- Update cassette to use o3-pro for consistency
- Ensure responses endpoint routing works correctly with o3-pro

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-08 10:52:23 +05:00
Fahad
2fdc8fad72 Resolve merge conflicts in o3-pro response parsing fix
- Use new output_text field format for o3-pro responses
- Update test expectations to use resolved model name o3-pro-2025-06-10
- Keep HTTP transport recorder and PII sanitization improvements
- Preserve both bug fix and recent GPT-5 updates

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-08 10:48:56 +05:00
Fahad
5ef13d1628 GPT 5 nano 2025-08-08 10:43:25 +05:00
Fahad
7f37efcbfe Grok-4 support 2025-08-08 09:39:07 +05:00
Fahad
1a8ec2e12f GPT-5, GPT-5-mini support
Improvements to model name resolution
Improved instructions for multi-step workflows when continuation is available
Improved instructions for chat tool
Improved preferred model resolution, moved code from registry -> each provider
Updated tests
2025-08-08 08:51:34 +05:00
Josh Vera
f00a5eaa36 docs: Update VCR testing documentation and fix PEP 8 import order
- Update docs/vcr-testing.md with new PII sanitization features
- Document transport_helpers.inject_transport() for simpler test setup
- Add sanitize_cassettes.py script documentation
- Update file structure to include all new components
- Fix PEP 8: Move copy import to top of openai_compatible.py
- Enhance security notes about automatic sanitization

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-13 10:56:59 -06:00
Josh Vera
91605bbd98 feat: Implement code review improvements from gemini-2.5-pro analysis
Key improvements:
• Added public reset_for_testing() method to registry for clean test state management
• Updated test setup/teardown to use new public API instead of private attributes
• Enhanced inject_transport helper to ensure OpenAI provider registration
• Migrated additional test files to use inject_transport pattern
• Reduced code duplication by ~30 lines across test files

🔧 Technical details:
• transport_helpers.py: Always register OpenAI provider for transport tests
• test_o3_pro_output_text_fix.py: Use reset_for_testing() API, remove redundant registration
• test_o3_pro_fixture_bisect.py: Migrate all 4 test methods to inject_transport
• test_o3_pro_simplified.py: Migrate both test methods to inject_transport
• providers/registry.py: Add reset_for_testing() public method

Quality assurance:
• All 7 o3-pro tests pass with new helper pattern
• No regression in test isolation or provider state management
• Improved maintainability through centralized transport injection
• Follows single responsibility principle with focused helper function

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-13 09:53:49 -06:00
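
A sketch of the public test-reset hook described above, assuming a singleton-style registry; the internals are illustrative, not the project's actual class:

```python
class ModelProviderRegistry:
    _instance = None

    @classmethod
    def get_instance(cls) -> "ModelProviderRegistry":
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    @classmethod
    def reset_for_testing(cls) -> None:
        """Drop the cached singleton so each test starts from clean provider state."""
        cls._instance = None
```

Tests call `ModelProviderRegistry.reset_for_testing()` in setup/teardown instead of poking private attributes.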
Nate Parsons
48cff76c99 Address PR #192 review comments
- Fix TOCTOU race condition by removing os.path.exists() check before file open
- Move imports (base64, binascii, os, utils.file_types) to top of file
- Replace broad Exception catch with specific binascii.Error for base64 decoding
- Maintain proper error handling and test compatibility
2025-07-12 22:13:03 -07:00
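
A sketch of the two review fixes above: open the file directly instead of checking os.path.exists() first (avoiding the TOCTOU race), and catch the specific binascii.Error when decoding base64. The function name and data-URL handling are illustrative:

```python
import base64
import binascii


def read_image_bytes(source: str) -> bytes:
    if source.startswith("data:"):
        try:
            return base64.b64decode(source.split(",", 1)[1], validate=True)
        except (binascii.Error, IndexError) as exc:
            raise ValueError(f"Invalid base64 image data: {exc}") from exc
    try:
        # No exists() pre-check: open directly and handle the failure,
        # so the file cannot vanish between check and use.
        with open(source, "rb") as handle:
            return handle.read()
    except FileNotFoundError as exc:
        raise ValueError(f"Image file not found: {source}") from exc
```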