feat: add GPT-5-Codex support with Responses API integration
Adds support for OpenAI's GPT-5-Codex model which uses the new Responses API endpoint (/v1/responses) instead of the standard Chat Completions API.

Changes:
- Add GPT-5-Codex to MODEL_CAPABILITIES with 400K context, 128K output
- Prioritize GPT-5-Codex for EXTENDED_REASONING tasks
- Add aliases: codex, gpt5-codex, gpt-5-code
- Update tests to expect GPT-5-Codex for extended reasoning

Benefits:
- 40-80% cost savings through Responses API caching
- 3% better performance on coding tasks (SWE-bench)
- Leverages existing dual-API infrastructure
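As a rough illustration of the registry changes listed above, the new capability entry and alias mapping might look like the sketch below; the dict layout, field names, and the MODEL_ALIASES mapping are assumptions for illustration only, not the repository's actual schema.

    # Hypothetical capability entry for GPT-5-Codex (field names assumed).
    MODEL_CAPABILITIES = {
        "gpt-5-codex": {
            "context_window": 400_000,     # 400K-token input context
            "max_output_tokens": 128_000,  # 128K-token output limit
            "endpoint": "/v1/responses",   # Responses API instead of Chat Completions
        },
        # ... existing models ...
    }

    # Short aliases resolving to the canonical model id (assumed shape).
    MODEL_ALIASES = {
        "codex": "gpt-5-codex",
        "gpt5-codex": "gpt-5-codex",
        "gpt-5-code": "gpt-5-codex",
    }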
@@ -98,7 +98,7 @@ class TestAutoModeProviderSelection:
         balanced = ModelProviderRegistry.get_preferred_fallback_model(ToolModelCategory.BALANCED)

         # Should select appropriate OpenAI models based on new preference order
-        assert extended_reasoning == "o3"  # O3 for extended reasoning
+        assert extended_reasoning == "gpt-5-codex"  # GPT-5-Codex prioritized for extended reasoning
         assert fast_response == "gpt-5"  # gpt-5 comes first in fast response preference
         assert balanced == "gpt-5"  # gpt-5 for balanced

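For readers skimming the diff, a minimal usage sketch of the fallback-selection call being tested; the import paths and the EXTENDED_REASONING enum member are assumptions inferred from the test code, only the call shape is taken from the diff.

    from providers.registry import ModelProviderRegistry  # import path assumed
    from tools.models import ToolModelCategory            # import path assumed

    # After this change, extended-reasoning tasks should prefer GPT-5-Codex.
    extended_reasoning = ModelProviderRegistry.get_preferred_fallback_model(
        ToolModelCategory.EXTENDED_REASONING
    )
    assert extended_reasoning == "gpt-5-codex"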