Model definitions now support a new `allow_code_generation` flag, intended only for use with higher-reasoning models such as GPT-5 Pro and Gemini 2.5 Pro.
When `true`, the `chat` tool can ask the external model to generate a full implementation, an update, step-by-step instructions, and so on, and then share the result with the calling agent.
This effectively lets us use more powerful models such as GPT-5 Pro, which are otherwise available only via the API or the $200 Pro plan within the ChatGPT app, to generate code or entire implementations for us.
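A minimal sketch of how the flag might sit in a model definition and gate the behaviour described above; the surrounding structure, field names, and context values are assumptions for illustration, only `allow_code_generation` comes from this change:

```python
# Illustrative only: the dictionary layout and other fields are assumptions;
# allow_code_generation is the flag introduced by this change.
MODEL_DEFINITIONS = {
    "gpt-5-pro": {
        "context_window": 400_000,       # assumed value for illustration
        "allow_code_generation": True,   # opt a high-reasoning model into full code generation
    },
    "gpt5-mini": {
        "context_window": 400_000,       # assumed value for illustration
        "allow_code_generation": False,  # smaller variants keep the default behaviour
    },
}


def can_generate_code(model_name: str) -> bool:
    """True if the chat tool may ask this model for a full implementation."""
    return MODEL_DEFINITIONS.get(model_name, {}).get("allow_code_generation", False)


print(can_generate_code("gpt-5-pro"))  # True
print(can_generate_code("gpt5-mini"))  # False
```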
- Add missing models to all tool parameter documentation
- Update model table in advanced-usage.md with GPT-5 series
- Add Gemini 2.0 Flash and Flash Lite models
- Include detailed capabilities for each model variant
- Fix model parameter consistency across all tool docs
Models added (illustrative registry sketch follows the list):
- GPT-5 (gpt5): Advanced reasoning with 400K context
- GPT-5 Mini (gpt5-mini): Efficient variant
- GPT-5 Nano (gpt5-nano): Fast, low-cost variant
- Gemini 2.0 Flash (flash-2.0): Audio/video support
- Gemini 2.0 Flash Lite (flashlite): Lightweight, text-only variant
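As a rough sketch of how these additions might appear alongside the model definitions, the entries below mirror the list above; the dictionary layout and field names are assumptions, not the project's actual schema:

```python
# Hypothetical registry entries mirroring the model list above; structure and
# field names are illustrative, capabilities are taken from the docs update.
NEW_MODELS = {
    "gpt5":      {"label": "GPT-5",                 "context_window": 400_000, "notes": "advanced reasoning"},
    "gpt5-mini": {"label": "GPT-5 Mini",            "notes": "efficient variant"},
    "gpt5-nano": {"label": "GPT-5 Nano",            "notes": "fast, low-cost variant"},
    "flash-2.0": {"label": "Gemini 2.0 Flash",      "notes": "audio/video support"},
    "flashlite": {"label": "Gemini 2.0 Flash Lite", "notes": "lightweight, text-only"},
}
```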