feat: grok-4.1 support https://github.com/BeehiveInnovations/pal-mcp-server/issues/339
@@ -48,8 +48,7 @@ Regardless of your default configuration, you can specify models per request:
| **`gpt5-mini`** (GPT-5 Mini) | OpenAI | 400K tokens | Efficient variant with reasoning | Balanced performance and capability |
| **`gpt5-nano`** (GPT-5 Nano) | OpenAI | 400K tokens | Fastest, cheapest GPT-5 variant | Summarization and classification tasks |
| **`grok-4`** | X.AI | 256K tokens | Latest flagship Grok model with reasoning, vision | Complex analysis, reasoning tasks |
| **`grok-3`** | X.AI | 131K tokens | Advanced reasoning model | Deep analysis, complex problems |
| **`grok-3-fast`** | X.AI | 131K tokens | Higher performance variant | Fast responses with reasoning |
| **`grok-4.1-fast-reasoning`** | X.AI | 2M tokens | High-performance Grok 4.1 Fast Reasoning with vision | Fast responses and light reasoning |
| **`llama`** (Llama 3.2) | Custom/Local | 128K tokens | Local inference, privacy | On-device analysis, cost-free processing |
| **Any model** | OpenRouter | Varies | Access to GPT-4, Claude, Llama, etc. | User-specified or based on task requirements |
@@ -72,8 +71,7 @@ cloud models (expensive/powerful) AND local models (free/private) in the same co
- **GPT-5**: Full-featured with reasoning support and vision
- **GPT-5 Mini**: Balanced efficiency and capability
- **GPT-5 Nano**: Optimized for fast, low-cost tasks
- **Grok-4**: Extended thinking support, vision capabilities, 256K context
- **Grok-3 Models**: Advanced reasoning, 131K context
- **Grok-4 / Grok-4.1-fast-reasoning**: Extended thinking support, vision capabilities (256K / 2M context)
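
The docs above note that, regardless of the default configuration, a model can be specified per request. As a rough illustration only, the sketch below uses the MCP Python client to call a hypothetical `chat` tool with an explicit `model` argument selecting the new `grok-4.1-fast-reasoning` entry. The tool name, argument names, and server launch command are assumptions for illustration, not taken from this commit.

```python
# Hypothetical sketch: per-request model selection over MCP.
# The tool name ("chat"), argument names ("prompt", "model"), and the
# launch command below are assumptions, not part of this diff.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Assumed launch command for the server; adjust to your install.
    params = StdioServerParameters(command="python", args=["-m", "server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Per-request override: ask for the 2M-context Grok 4.1 model.
            result = await session.call_tool(
                "chat",
                arguments={
                    "prompt": "Summarize the trade-offs in this module.",
                    "model": "grok-4.1-fast-reasoning",
                },
            )
            print(result)


asyncio.run(main())
```
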
## Model Usage Restrictions