feat!: Full code can now be generated by an external model and shared with the AI tool (Claude Code / Codex etc.)!
Model definitions now support a new `allow_code_generation` flag, intended only for higher-reasoning models such as GPT-5 Pro and Gemini 2.5 Pro. When `true`, the `chat` tool can request that the external model generate a full implementation, update, or set of instructions, and then share the result with the calling agent. This effectively lets us use more powerful models such as GPT-5 Pro (available only via the API or the $200 Pro plan within the ChatGPT app) to generate code or entire implementations for us.
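As an illustrative sketch (the `model_name` key and all values here are hypothetical, not taken from this commit), a model entry that opts into the new flag could look like:

```json
{
  "model_name": "gpt-5-pro",
  "description": "Hypothetical entry for a high-reasoning model",
  "intelligence_score": 18,
  "use_openai_response_api": true,
  "default_reasoning_effort": "high",
  "supports_json_mode": true,
  "supports_images": true,
  "supports_temperature": true,
  "allow_code_generation": true,
  "max_image_size_mb": 32.0
}
```

Per the schema documentation, `allow_code_generation` should only be set to `true` for a model more capable than the AI model / CLI currently in use.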
@@ -20,7 +20,8 @@
     "use_openai_response_api": "Set to true when the model must use the /responses endpoint (reasoning models like GPT-5 Pro). Leave false/omit for standard chat completions.",
     "default_reasoning_effort": "Default reasoning effort level for models that support it (e.g., 'low', 'medium', 'high'). Omit if not applicable.",
     "description": "Human-readable description of the model",
-    "intelligence_score": "1-20 human rating used as the primary signal for auto-mode model ordering"
+    "intelligence_score": "1-20 human rating used as the primary signal for auto-mode model ordering",
+    "allow_code_generation": "Whether this model can generate and suggest fully working code - complete with functions, files, and detailed implementation instructions - for your AI tool to use right away. Only set this to 'true' for a model more capable than the AI model / CLI you're currently using."
     }
   },
   "models": [
@@ -44,6 +45,7 @@
     "supports_json_mode": true,
     "supports_images": true,
     "supports_temperature": true,
+    "allow_code_generation": true,
     "max_image_size_mb": 32.0
   },
   {