fix: use CUSTOM_CONNECT_TIMEOUT for gemini too

feat: add grok-4 to openrouter_models.json
Fahad
2025-10-06 23:23:24 +04:00
parent a65485a1e5
commit a33efbde52
5 changed files with 68 additions and 11 deletions


@@ -75,7 +75,7 @@ DEFAULT_MODEL=auto # Claude picks best model for each task (recommended)
- **`o3-mini`**: Balanced speed/quality (200K context)
- **`o4-mini`**: Latest reasoning model, optimized for shorter contexts
- **`grok-3`**: GROK-3 advanced reasoning (131K context)
-- **`grok-4-latest`**: GROK-4 latest flagship model (256K context)
+- **`grok-4`**: GROK-4 flagship model (256K context)
- **Custom models**: via OpenRouter or local APIs
### Thinking Mode Configuration
@@ -108,7 +108,7 @@ OPENAI_ALLOWED_MODELS=o3-mini,o4-mini,mini
GOOGLE_ALLOWED_MODELS=flash,pro
# X.AI GROK model restrictions
-XAI_ALLOWED_MODELS=grok-3,grok-3-fast,grok-4-latest
+XAI_ALLOWED_MODELS=grok-3,grok-3-fast,grok-4
# OpenRouter model restrictions (affects models via custom provider)
OPENROUTER_ALLOWED_MODELS=opus,sonnet,mistral
@@ -129,11 +129,11 @@ OPENROUTER_ALLOWED_MODELS=opus,sonnet,mistral
- `pro` (shorthand for Pro model)
**X.AI GROK Models:**
-- `grok-4-latest` (256K context, latest flagship model with reasoning, vision, and structured outputs)
+- `grok-4` (256K context, flagship Grok model with reasoning, vision, and structured outputs)
- `grok-3` (131K context, advanced reasoning)
- `grok-3-fast` (131K context, higher performance)
-- `grok` (shorthand for grok-4-latest)
-- `grok4` (shorthand for grok-4-latest)
+- `grok` (shorthand for grok-4)
+- `grok4` (shorthand for grok-4)
- `grok3` (shorthand for grok-3)
- `grokfast` (shorthand for grok-3-fast)
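
After this change, a minimal `.env` using the renamed model might look like the sketch below. The `XAI_ALLOWED_MODELS` line comes straight from the diff above; setting `DEFAULT_MODEL` to the `grok` shorthand is an assumption for illustration, since the docs only show `DEFAULT_MODEL=auto`.

```env
# Restrict X.AI models to the reasoning variants plus the flagship alias
XAI_ALLOWED_MODELS=grok-3,grok-3-fast,grok-4

# Hypothetical: use the `grok` shorthand, which now resolves to grok-4
# (previously grok-4-latest) per the alias table in this commit
DEFAULT_MODEL=grok
```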