Commit Graph

13 Commits

Author SHA1 Message Date
Fahad
514c9c58fc feat: grok-4.1 support https://github.com/BeehiveInnovations/pal-mcp-server/issues/339 2025-12-11 20:08:17 +00:00
Fahad
8b16405f06 feat: GPT-5.2 support 2025-12-11 19:11:50 +00:00
Fahad
b2dc84992d fix: rebranding, see [docs/name-change.md](docs/name-change.md) for details 2025-12-04 18:15:14 +04:00
Fahad
19a2a89b12 fix: failing test for gemini 3.0 pro open router 2025-11-18 20:50:42 +04:00
Fahad
bbfdfac511 feat: Gemini 3.0 Pro Preview for Open Router 2025-11-18 20:44:22 +04:00
Bjorn Melin
8e9aa2304d feat: add new GPT-5.1 models to configuration files and update model selection logic in OpenAI provider 2025-11-14 01:35:11 -07:00
Fahad
ece8a5ebed feat!: Full code can now be generated by an external model and shared with the AI tool (Claude Code / Codex etc)!
model definitions now support a new `allow_code_generation` flag, to be used only with higher reasoning models such as GPT-5-Pro and Gemini 2.5 Pro

When `true`, the `chat` tool can now ask the external model to generate a full implementation, an update, or instructions, and then share the result with the calling agent.

This effectively allows us to utilize more powerful models such as GPT-5-Pro (which are either API-only or part of the $200 Pro plan within the ChatGPT app) to generate code, or even entire implementations, for us.
2025-10-07 18:49:13 +04:00
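
For illustration, a model definition enabling this behaviour might look roughly like the sketch below. Only the `allow_code_generation` flag itself is confirmed by the commit message; the surrounding field names are assumptions.

```python
# Hypothetical sketch of a model definition entry. Only `allow_code_generation`
# is confirmed by the commit above; the other field names are illustrative
# assumptions, not the repository's actual schema.
example_model_definition = {
    "model_name": "gpt-5-pro",          # assumed key name
    "allow_code_generation": True,      # new flag introduced by this commit
    "description": "High-reasoning model that may return full implementations",
}

# When the flag is true, the `chat` tool may request a complete implementation
# from this model and relay it to the calling agent (Claude Code, Codex, ...).
```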
Lachlan Donald
abed075b2e feat: add support for openai/gpt-5-pro model
Adds configuration for the new GPT-5 Pro model released by OpenAI.

Specifications from official docs:
- 400K context window
- 272K max output tokens (largest of GPT-5 family)
- Highest reasoning capability (5/5)
- Reasoning token support
- Vision input support (text+image input, text output only)
- No temperature support (fixed, like other reasoning models)
- Available via Responses API only (use_openai_response_api: true)
- Default reasoning effort: high

Accessible via alias 'gpt5pro'.
2025-10-07 16:02:37 +11:00
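
Restated as a config-style sketch: the values mirror the specifications listed above, but apart from `use_openai_response_api` and the `gpt5pro` alias, the field names are assumptions rather than the file's actual schema.

```python
# Hypothetical shape of the openai/gpt-5-pro capabilities entry. Values come
# from the commit message above; field names other than use_openai_response_api
# are illustrative assumptions.
gpt5_pro_entry = {
    "model_name": "openai/gpt-5-pro",
    "aliases": ["gpt5pro"],
    "context_window": 400_000,            # 400K context window
    "max_output_tokens": 272_000,         # largest of the GPT-5 family
    "supports_reasoning_tokens": True,
    "supports_vision": True,              # text+image input, text output only
    "supports_temperature": False,        # fixed, like other reasoning models
    "use_openai_response_api": True,      # available via Responses API only
    "default_reasoning_effort": "high",
}
```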
Fahad
a33efbde52 fix: use CUSTOM_CONNECT_TIMEOUT for gemini too
feat: add grok-4 to openrouter_models.json
2025-10-06 23:23:24 +04:00
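
A minimal sketch of how such a timeout override is typically consumed; only the CUSTOM_CONNECT_TIMEOUT variable name comes from the commit, the 30-second fallback is an assumption.

```python
import os

# CUSTOM_CONNECT_TIMEOUT is the variable named in the commit above; the
# 30-second default used here is an illustrative assumption.
connect_timeout = float(os.getenv("CUSTOM_CONNECT_TIMEOUT", "30"))
# The fix applies this same timeout to the Gemini HTTP client as well.
```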
Fahad
ff9a07a37a feat!: breaking change - OpenRouter models are now read from conf/openrouter_models.json while Custom / Self-hosted models are read from conf/custom_models.json
feat: Azure OpenAI / Azure AI Foundry support. Models should be defined in conf/azure_models.json (or a custom path). See .env.example for environment variables or see readme. https://github.com/BeehiveInnovations/zen-mcp-server/issues/265

feat: OpenRouter / Custom Models / Azure can now each use a custom config path (see .env.example)

refactor: Model registry class made abstract; OpenRouter / Custom Provider / Azure OpenAI now subclass it

refactor: breaking change: the `is_custom` property has been removed from model_capabilities.py (and thus from custom_models.json), since models are now read from separate configuration files
2025-10-04 21:10:56 +04:00
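
A rough sketch of how the split configuration files might be loaded; the path-override variable names below are assumptions for illustration, the real environment variables are documented in .env.example.

```python
import json
import os
from pathlib import Path

# Default locations introduced by this commit. The override variable names
# below are illustrative assumptions; see .env.example for the actual ones.
CONFIG_FILES = {
    "openrouter": Path(os.getenv("OPENROUTER_MODELS_PATH", "conf/openrouter_models.json")),
    "custom": Path(os.getenv("CUSTOM_MODELS_PATH", "conf/custom_models.json")),
    "azure": Path(os.getenv("AZURE_MODELS_PATH", "conf/azure_models.json")),
}

def load_models(provider: str):
    """Read one provider's model definitions from its own JSON file."""
    with CONFIG_FILES[provider].open() as fh:
        return json.load(fh)  # structure depends on the file's schema
```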
Fahad
f44ca326ef Breaking change: openrouter_models.json -> custom_models.json
* Support for Custom URLs and custom models, including locally hosted models such as ollama
* Support for native + OpenRouter + local models (i.e. dozens of models) means you can start delegating sub-tasks to particular models, or offload work such as localizations and other routine tasks to local models
* Several tests added
* Precommit now also includes untracked (new) files
* Logfile auto rollover
* Improved logging
2025-06-13 15:22:09 +04:00
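
A minimal sketch of what a locally hosted entry in custom_models.json could look like, assuming an Ollama endpoint; the field names are illustrative, not the file's documented schema.

```python
# Hypothetical custom_models.json entry for a locally hosted Ollama model;
# all field names here are assumptions for illustration only.
local_model_entry = {
    "model_name": "llama3.2",
    "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    "api_key": None,                           # no key needed for a local server
    "description": "Local model for delegated sub-tasks such as localizations",
}
```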
Fahad
8cbbe94417 New openrouter tests
Fixed flash aliases
More models
2025-06-13 07:00:53 +04:00
Fahad
cd1105b741 WIP
- OpenRouter model configuration registry
- Model definition file for users to be able to control
- Update instructions
2025-06-13 05:52:26 +04:00