12 Commits

Author SHA1 Message Date
c71a535f16 added OpenCode Zen as a provider 2025-12-25 11:08:23 +01:00
Fahad
ece8a5ebed feat!: Full code can now be generated by an external model and shared with the AI tool (Claude Code / Codex, etc.)!
Model definitions now support a new `allow_code_generation` flag, intended only for higher-reasoning models such as GPT-5-Pro and Gemini 2.5 Pro (a sketch follows this entry).

When `true`, the `chat` tool can request the external model to generate a full implementation, update, or set of instructions, and then share the result with the calling agent.

This effectively allows us to use more powerful models such as GPT-5-Pro (which are otherwise API-only or part of the $200 Pro plan within the ChatGPT app) to generate code or entire implementations for us.
2025-10-07 18:49:13 +04:00
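
A minimal sketch of what a model definition carrying this flag might look like, shown as a Python dict. Only `allow_code_generation` comes from the commit message above; every other key is an illustrative assumption, not the project's actual schema.

```python
# Hypothetical model definition entry; only `allow_code_generation` is taken
# from the commit message above, the remaining keys are assumptions.
gpt5_pro_definition = {
    "model_name": "gpt-5-pro",      # assumed field name
    "description": "High-reasoning model suited to generating full implementations",
    "allow_code_generation": True,  # lets the `chat` tool request complete code from this model
}
```
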
Fahad
a65485a1e5 feat: support for GPT-5-Pro, the highest-reasoning model https://github.com/BeehiveInnovations/zen-mcp-server/issues/275 2025-10-06 22:36:44 +04:00
Fahad
3824d13161 feat: OpenAI and OpenAI-compatible models (such as Azure OpenAI) can declare that they use the Responses API instead via `use_openai_responses_api` 2025-10-04 21:20:47 +04:00
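
A hedged sketch of how such a declaration might look in a model definition. Only `use_openai_responses_api` comes from the commit message; the other keys are assumptions.

```python
# Hypothetical Azure OpenAI model entry; only `use_openai_responses_api`
# is from the commit message, the remaining keys are assumptions.
azure_gpt5_definition = {
    "model_name": "gpt-5-pro",           # assumed field name
    "deployment": "my-gpt5-deployment",  # assumed Azure deployment identifier
    "use_openai_responses_api": True,    # use the Responses API rather than Chat Completions
}
```
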
Fahad
ff9a07a37a feat!: breaking change - OpenRouter models are now read from conf/openrouter_models.json while Custom / Self-hosted models are read from conf/custom_models.json
feat: Azure OpenAI / Azure AI Foundry support. Models should be defined in conf/azure_models.json (or a custom path). See .env.example for the environment variables, or see the README. https://github.com/BeehiveInnovations/zen-mcp-server/issues/265

feat: OpenRouter / custom models / Azure can now each use a custom config path (see .env.example and the sketch after this entry)

refactor: the model registry class was made abstract; OpenRouter / Custom Provider / Azure OpenAI now subclass it

refactor: breaking change: the `is_custom` property has been removed from model_capabilities.py (and thus from custom_models.json) now that each provider's models are read from a separate configuration file
2025-10-04 21:10:56 +04:00
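
A sketch of how the per-provider config paths described above might be resolved. The environment variable names are assumptions (the real names live in .env.example); only the default file paths come from the commit message.

```python
import os

# Illustrative only: the variable names below are assumptions, consult
# .env.example for the real ones. The default paths are those named in the
# commit message above.
openrouter_models_path = os.getenv("OPENROUTER_MODELS_CONFIG_PATH", "conf/openrouter_models.json")
custom_models_path = os.getenv("CUSTOM_MODELS_CONFIG_PATH", "conf/custom_models.json")
azure_models_path = os.getenv("AZURE_MODELS_CONFIG_PATH", "conf/azure_models.json")
```
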
Fahad
6cab9e56fc feat: added `intelligence_score` to the model capabilities schema; a number from 1-20 that can be specified to influence the sort order of models presented to the CLI in auto-selection mode (see the sketch after this entry)
fix: the model definition has been re-introduced into the schema, but intelligently: only a summary is generated per tool. Required to ensure the CLI calls and uses the correct model
fix: removed the `model` param from tools where it wasn't needed
fix: enforced adherence to `*_ALLOWED_MODELS` by advertising only the allowed models to the CLI
fix: removed duplicates across providers when passing canonical names back to the CLI; the first enabled provider wins
2025-10-02 21:43:44 +04:00
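
A sketch of how a 1-20 `intelligence_score` could drive the auto-selection ordering. The entries and scores below are made up; only the 1-20 range and its influence on sort order come from the commit message.

```python
# Hypothetical capability entries; names and scores are illustrative.
models = [
    {"model_name": "small-fast-model", "intelligence_score": 9},
    {"model_name": "gpt-5-pro", "intelligence_score": 18},
]

# Higher scores are presented to the CLI first in auto-selection mode.
ordered = sorted(models, key=lambda m: m["intelligence_score"], reverse=True)
print([m["model_name"] for m in ordered])  # ['gpt-5-pro', 'small-fast-model']
```
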
Fahad
a254ff2220 refactor: removed a method from the provider; model capabilities should be used instead
refactor: cleaned up the temperature factory method
2025-10-02 11:08:56 +04:00
Fahad
9c11ecc4bf refactor: clean temperature inference 2025-10-02 10:41:05 +04:00
Fahad
6d237d0970 refactor: moved temperature method from base provider to model capabilities (see the sketch after this entry)
refactor: model listing cleanup, moved logic to model_capabilities.py
docs: added AGENTS.md for onboarding Codex
2025-10-02 10:25:41 +04:00
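
A rough sketch of the idea that temperature handling travels with a model's capabilities rather than with the provider. The class and method names are assumptions, not the project's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelCapabilities:
    # Illustrative only: field and method names are assumptions.
    supports_temperature: bool = True
    default_temperature: float = 0.7

    def resolve_temperature(self, requested: Optional[float]) -> Optional[float]:
        # Constraints live on the capabilities object, not on the provider.
        if not self.supports_temperature:
            return None  # e.g. reasoning-only models that ignore temperature
        return self.default_temperature if requested is None else requested
```
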
Fahad
f461cb4519 refactor: moved temperature method from base provider to model capabilities
docs: added AGENTS.md for onboarding Codex
2025-10-02 09:17:36 +04:00
Fahad
2b10adcaf2 refactor: removed hook from base class, turned into helper static method instead 2025-10-02 08:45:59 +04:00
Fahad
182aa627df refactor: code cleanup 2025-10-02 08:09:44 +04:00