fix: use CUSTOM_CONNECT_TIMEOUT for gemini too
feat: add grok-4 to openrouter_models.json
@@ -44,7 +44,7 @@ Regardless of your default configuration, you can specify models per request:
 | **`gpt5`** (GPT-5) | OpenAI | 400K tokens | Advanced model with reasoning support | Complex problems requiring advanced reasoning |
 | **`gpt5-mini`** (GPT-5 Mini) | OpenAI | 400K tokens | Efficient variant with reasoning | Balanced performance and capability |
 | **`gpt5-nano`** (GPT-5 Nano) | OpenAI | 400K tokens | Fastest, cheapest GPT-5 variant | Summarization and classification tasks |
-| **`grok-4-latest`** | X.AI | 256K tokens | Latest flagship model with reasoning, vision | Complex analysis, reasoning tasks |
+| **`grok-4`** | X.AI | 256K tokens | Latest flagship Grok model with reasoning, vision | Complex analysis, reasoning tasks |
 | **`grok-3`** | X.AI | 131K tokens | Advanced reasoning model | Deep analysis, complex problems |
 | **`grok-3-fast`** | X.AI | 131K tokens | Higher performance variant | Fast responses with reasoning |
 | **`llama`** (Llama 3.2) | Custom/Local | 128K tokens | Local inference, privacy | On-device analysis, cost-free processing |
@@ -75,7 +75,7 @@ DEFAULT_MODEL=auto # Claude picks best model for each task (recommended)
 - **`o3-mini`**: Balanced speed/quality (200K context)
 - **`o4-mini`**: Latest reasoning model, optimized for shorter contexts
 - **`grok-3`**: GROK-3 advanced reasoning (131K context)
-- **`grok-4-latest`**: GROK-4 latest flagship model (256K context)
+- **`grok-4`**: GROK-4 flagship model (256K context)
 - **Custom models**: via OpenRouter or local APIs

 ### Thinking Mode Configuration
@@ -108,7 +108,7 @@ OPENAI_ALLOWED_MODELS=o3-mini,o4-mini,mini
 GOOGLE_ALLOWED_MODELS=flash,pro

 # X.AI GROK model restrictions
-XAI_ALLOWED_MODELS=grok-3,grok-3-fast,grok-4-latest
+XAI_ALLOWED_MODELS=grok-3,grok-3-fast,grok-4

 # OpenRouter model restrictions (affects models via custom provider)
 OPENROUTER_ALLOWED_MODELS=opus,sonnet,mistral
@@ -129,11 +129,11 @@ OPENROUTER_ALLOWED_MODELS=opus,sonnet,mistral
 - `pro` (shorthand for Pro model)

 **X.AI GROK Models:**
-- `grok-4-latest` (256K context, latest flagship model with reasoning, vision, and structured outputs)
+- `grok-4` (256K context, flagship Grok model with reasoning, vision, and structured outputs)
 - `grok-3` (131K context, advanced reasoning)
 - `grok-3-fast` (131K context, higher performance)
-- `grok` (shorthand for grok-4-latest)
-- `grok4` (shorthand for grok-4-latest)
+- `grok` (shorthand for grok-4)
+- `grok4` (shorthand for grok-4)
 - `grok3` (shorthand for grok-3)
 - `grokfast` (shorthand for grok-3-fast)
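After this commit, a `.env` restricting X.AI to the renamed model would look like the following sketch, assembled only from variable names and values that appear in the diff above (`DEFAULT_MODEL=auto` is taken from the earlier hunk):

```shell
# X.AI GROK model restrictions — grok-4 replaces the old grok-4-latest name
XAI_ALLOWED_MODELS=grok-3,grok-3-fast,grok-4

# Let the client pick the best allowed model for each task (recommended)
DEFAULT_MODEL=auto
```

The shorthands `grok` and `grok4` now resolve to `grok-4`, so an allow-list containing `grok-4` also covers requests made via those aliases.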