feat: Add comprehensive GPT-5 series model support

- Add GPT-5, GPT-5-mini, and GPT-5-nano models to unified configuration
- Implement proper thinking mode support via dynamic capability checking
- Add OpenAI provider model enumeration methods for registry integration
- Update tests to cover all GPT-5 models and their aliases
- Fix critical bug where thinking mode was hardcoded instead of using per-model capabilities (see the sketch after this list)
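
The capability-check fix and the enumeration methods are easiest to see in code. A minimal sketch, assuming hypothetical names (ModelCapabilities, OpenAIModelRegistry, list_models, thinking_mode_enabled); only the supports_extended_thinking field and the alias lists come from the configuration entries in this commit:

    from dataclasses import dataclass, field


    @dataclass
    class ModelCapabilities:
        model_name: str
        supports_extended_thinking: bool
        aliases: list[str] = field(default_factory=list)


    class OpenAIModelRegistry:
        """Hypothetical registry backed by the unified model configuration."""

        def __init__(self, models: list[ModelCapabilities]) -> None:
            self._models = {m.model_name: m for m in models}

        def list_models(self) -> list[str]:
            # Enumeration method exposed for provider/registry integration.
            return list(self._models)

        def list_all_known_models(self) -> list[str]:
            # Canonical names plus every alias, e.g. gpt-5, gpt5, gpt-5-nano, nano.
            names = list(self._models)
            for m in self._models.values():
                names.extend(m.aliases)
            return names

        def thinking_mode_enabled(self, model_name: str) -> bool:
            # The fix: consult the model's capability record rather than a
            # hardcoded value when deciding on reasoning-token support.
            caps = self._models.get(model_name)
            return caps is not None and caps.supports_extended_thinking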

Breaking Changes:
- None (backward compatible)

New Models Available:
- gpt-5 (400K context, 128K output, reasoning support)
- gpt-5-mini (400K context, 128K output, efficient variant)
- gpt-5-nano (400K context, 128K output, fastest/cheapest variant)

Aliases:
- gpt5, gpt5-mini, gpt5mini, mini, gpt5-nano, gpt5nano, nano

All three GPT-5 models support (see the lookup sketch after this list):
- Extended thinking mode (reasoning tokens)
- Vision capabilities
- JSON mode
- Function calling
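
To make the alias and capability lists concrete, a minimal lookup sketch follows; the configuration path, "models" key, and function names are illustrative assumptions, while the field names (model_name, aliases, supports_extended_thinking, ...) match the entries added in the diff:

    import json


    def load_models(path: str) -> list[dict]:
        # Assumes the unified configuration keeps its entries under a top-level
        # "models" key; adjust to the actual file layout.
        with open(path, encoding="utf-8") as f:
            return json.load(f)["models"]


    def resolve(models: list[dict], name_or_alias: str) -> dict | None:
        # Match either the canonical model_name or any entry in its aliases list.
        wanted = name_or_alias.lower()
        for entry in models:
            if wanted == entry["model_name"] or wanted in entry.get("aliases", []):
                return entry
        return None


    # Example: resolve(models, "gpt5mini") returns the gpt-5-mini entry, whose
    # supports_extended_thinking / supports_images / supports_json_mode /
    # supports_function_calling flags drive the behaviours listed above.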

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: David Knedlik
Date: 2025-08-21 14:27:00 -05:00
Parent: 12542054a2
Commit: 4930824052
3 changed files with 76 additions and 10 deletions

@@ -228,6 +228,48 @@
"temperature_constraint": "fixed",
"description": "OpenAI's o4-mini model - optimized for shorter contexts with rapid reasoning and vision"
},
{
"model_name": "gpt-5",
"aliases": ["gpt5", "gpt-5"],
"context_window": 400000,
"max_output_tokens": 128000,
"supports_extended_thinking": true,
"supports_json_mode": true,
"supports_function_calling": true,
"supports_images": true,
"max_image_size_mb": 20.0,
"supports_temperature": true,
"temperature_constraint": "fixed",
"description": "GPT-5 (400K context, 128K output) - Advanced model with reasoning support"
},
{
"model_name": "gpt-5-mini",
"aliases": ["gpt5-mini", "gpt5mini", "mini"],
"context_window": 400000,
"max_output_tokens": 128000,
"supports_extended_thinking": true,
"supports_json_mode": true,
"supports_function_calling": true,
"supports_images": true,
"max_image_size_mb": 20.0,
"supports_temperature": true,
"temperature_constraint": "fixed",
"description": "GPT-5-mini (400K context, 128K output) - Efficient variant with reasoning support"
},
{
"model_name": "gpt-5-nano",
"aliases": ["gpt5nano", "gpt5-nano", "nano"],
"context_window": 400000,
"max_output_tokens": 128000,
"supports_extended_thinking": true,
"supports_json_mode": true,
"supports_function_calling": true,
"supports_images": true,
"max_image_size_mb": 20.0,
"supports_temperature": true,
"temperature_constraint": "fixed",
"description": "GPT-5 nano (400K context) - Fastest, cheapest version of GPT-5 for summarization and classification tasks"
},
{
"model_name": "llama3.2",
"aliases": ["local-llama", "local", "llama3.2", "ollama-llama"],