Add o3-pro model support and extend test coverage

- Added o3-pro model configuration to custom_models.json with 200K context
- Updated the OpenAI provider to support o3-pro with its fixed-temperature constraint (see the sketch after this list)
- Extended simulator tests to include o3-pro validation scenarios
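
The temperature handling mentioned above is not part of the hunk shown below. The following is a minimal sketch of how a fixed-temperature constraint could be enforced when building a request; `FIXED_TEMPERATURE_MODELS` and `build_completion_params` are assumed names for illustration, not the provider's actual API.

```python
# Hypothetical sketch of a fixed-temperature constraint; FIXED_TEMPERATURE_MODELS
# and build_completion_params are assumed names, not this repository's actual API.
FIXED_TEMPERATURE_MODELS = {
    "o3-pro": 1.0,  # assumed: o3-pro accepts only the default temperature
}


def build_completion_params(model: str, prompt: str, temperature: float = 1.0) -> dict:
    """Assemble request parameters, pinning the temperature for constrained models."""
    fixed = FIXED_TEMPERATURE_MODELS.get(model)
    if fixed is not None and temperature != fixed:
        # Coerce rather than raise, so callers need no model-specific handling.
        temperature = fixed
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
```

Coercing rather than rejecting keeps callers model-agnostic; the real provider may instead validate and raise.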

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Lachlan Donald
Date: 2025-06-14 13:02:44 +10:00
Commit: 69ec38d1af (parent: ac9c58ce61)
5 changed files with 64 additions and 9 deletions

@@ -50,6 +50,7 @@ MODEL_CAPABILITIES_DESC = {
     # OpenAI models - Available when OPENAI_API_KEY is configured
     "o3": "Strong reasoning (200K context) - Logical problems, code generation, systematic analysis",
     "o3-mini": "Fast O3 variant (200K context) - Balanced performance/speed, moderate complexity",
+    "o3-pro": "Professional-grade reasoning (200K context) - EXTREMELY EXPENSIVE: Only for the most complex problems requiring universe-scale complexity analysis OR when the user explicitly asks for this model. Use sparingly for critical architectural decisions or exceptionally complex debugging that other models cannot handle.",
     "o4-mini": "Latest reasoning model (200K context) - Optimized for shorter contexts, rapid reasoning",
     "o4-mini-high": "Enhanced O4 mini (200K context) - Higher reasoning effort for complex tasks",
     # Full model names also supported (for explicit specification)
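
The simulator-test additions are not shown in this hunk. A rough idea of what an o3-pro validation scenario could assert is sketched below; the capability table, field names, and test structure are stand-ins, not the project's real simulator-test harness.

```python
# Hypothetical o3-pro validation scenario; the capability table and field names
# are assumptions, not the project's real simulator-test harness.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Capabilities:
    context_window: int
    fixed_temperature: Optional[float]


# Stand-in registry; the real tests would resolve this through the server's model registry.
CAPABILITIES = {
    "o3-pro": Capabilities(context_window=200_000, fixed_temperature=1.0),
}


def test_o3_pro_capabilities():
    caps = CAPABILITIES["o3-pro"]
    assert caps.context_window == 200_000   # 200K context, per the commit message
    assert caps.fixed_temperature == 1.0    # temperature cannot be overridden
```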