docs/openrouter-sync-operations #1
- `conf/openai_models.json` – OpenAI catalogue (can be overridden with `OPENAI_MODELS_CONFIG_PATH`)
- `conf/gemini_models.json` – Gemini catalogue (`GEMINI_MODELS_CONFIG_PATH`)
- `conf/xai_models.json` – X.AI / GROK catalogue (`XAI_MODELS_CONFIG_PATH`)
- `conf/openrouter_models.json` – Curated OpenRouter overrides (`OPENROUTER_MODELS_CONFIG_PATH`)
- `conf/openrouter_models_live.json` – Generated live OpenRouter catalogue (`OPENROUTER_LIVE_MODELS_CONFIG_PATH`)
- `conf/dial_models.json` – DIAL aggregation catalogue (`DIAL_MODELS_CONFIG_PATH`)
- `conf/custom_models.json` – Custom/OpenAI-compatible endpoints (`CUSTOM_MODELS_CONFIG_PATH`)

| Provider | Models | Aliases |
|----------|--------|---------|
| OpenAI | `gpt-5.2`, `gpt-5.1-codex`, `gpt-5.1-codex-mini`, `gpt-5`, `gpt-5.2-pro`, `gpt-5-mini`, `gpt-5-nano`, `gpt-5-codex`, `gpt-4.1`, `o3`, `o3-mini`, `o3-pro`, `o4-mini` | `gpt5.2`, `gpt-5.2`, `5.2`, `gpt5.1-codex`, `codex-5.1`, `codex-mini`, `gpt5`, `gpt5pro`, `mini`, `nano`, `codex`, `o3mini`, `o3pro`, `o4mini` |
| Gemini | `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-2.0-flash`, `gemini-2.0-flash-lite` | `pro`, `gemini-pro`, `flash`, `flash-2.0`, `flashlite` |
| X.AI | `grok-4`, `grok-4.1-fast` | `grok`, `grok4`, `grok-4.1-fast-reasoning` |
| OpenRouter | Generated live catalogue plus curated overrides | e.g., `opus`, `sonnet`, `flash`, `pro`, `mistral` |
| Custom | User-managed entries such as `llama3.2` | Define your own aliases per entry |
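
As an illustration of how the aliases above resolve, a hypothetical case-insensitive lookup over a few of them (the real server derives this mapping from the JSON manifests; `resolve_model` is not part of the codebase):

```python
# Illustrative subset of aliases from the table above.
ALIASES = {
    "pro": "gemini-2.5-pro",
    "flash": "gemini-2.5-flash",
    "grok": "grok-4",
    "o3mini": "o3-mini",
}

def resolve_model(name: str) -> str:
    """Return the canonical name for a known alias, else the name unchanged."""
    return ALIASES.get(name.lower(), name)

print(resolve_model("grok"))  # grok-4
```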

Latest OpenAI entries (`gpt-5.2`, `gpt-5.1-codex`, `gpt-5.1-codex-mini`, `gpt-5.2-pro`) expose 400K-token contexts with large outputs, reasoning-token support, and multimodal inputs. `gpt-5.1-codex` and `gpt-5.2-pro` are Responses-only with streaming disabled, while the base `gpt-5.2` and Codex mini support streaming along with full code-generation flags. For OpenRouter, keep PAL-specific metadata in the curated manifest and regenerate the live catalogue when OpenRouter adds or removes models; see [Refreshing the Live OpenRouter Catalogue](custom_models.md#refreshing-the-live-openrouter-catalogue).
> **Tip:** Copy the JSON file you need, customise it, and point the corresponding `*_MODELS_CONFIG_PATH` environment variable to your version. This lets you enable or disable capabilities (JSON mode, function calling, temperature support, code generation) without editing Python.
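
A minimal sketch of that override convention, assuming nothing about the real loader beyond "environment variable wins, bundled manifest otherwise" (`config_path` is a hypothetical helper):

```python
import os

def config_path(env_var: str, default: str) -> str:
    """Prefer the user's override path; fall back to the bundled manifest."""
    return os.environ.get(env_var) or default

# e.g. after `export OPENROUTER_MODELS_CONFIG_PATH=/etc/pal/openrouter_models.json`
path = config_path("OPENROUTER_MODELS_CONFIG_PATH", "conf/openrouter_models.json")
```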
The system automatically routes models to the appropriate provider:
1. Entries in `conf/custom_models.json` → Always routed through the Custom API (requires `CUSTOM_API_URL`)
2. Entries in `conf/openrouter_models_live.json` and `conf/openrouter_models.json` → Routed through OpenRouter (requires `OPENROUTER_API_KEY`)
3. **Unknown models** → Fallback logic based on model name patterns
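
The routing precedence above can be sketched as follows; the provider labels and the set arguments are illustrative stand-ins for the parsed manifests, not the actual implementation:

```python
def route_model(name: str, custom: set, openrouter: set) -> str:
    """Pick a provider following the precedence described above."""
    if name in custom:         # 1. conf/custom_models.json always wins
        return "custom"
    if name in openrouter:     # 2. live + curated OpenRouter manifests
        return "openrouter"
    return "fallback"          # 3. name-pattern heuristics
```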
**Provider Priority Order:**
### Adding Custom Models

Edit `conf/openrouter_models.json` to tweak OpenRouter behaviour or `conf/custom_models.json` to add local models. The generated `conf/openrouter_models_live.json` file is discovery data from OpenRouter's `/api/v1/models` endpoint; curated entries in `conf/openrouter_models.json` override those generated defaults. Each entry maps directly onto [`ModelCapabilities`](../providers/shared/model_capabilities.py).
### Refreshing the Live OpenRouter Catalogue
Run the sync script whenever OpenRouter adds or removes models that you want `listmodels` and provider enumeration to expose, or before cutting a release that should include an updated OpenRouter catalogue.
```bash
source .pal_venv/bin/activate
python scripts/sync_openrouter_models.py
```
By default the script:
- fetches `https://openrouter.ai/api/v1/models`
- writes conservative discovery data to `conf/openrouter_models_live.json`
- leaves `conf/openrouter_models.json` untouched
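
Conceptually, the sync step reduces each discovered model to a conservative record before writing the live manifest. This is a sketch, not the actual script (`scripts/sync_openrouter_models.py`); the payload field names (`data`, `id`, `context_length`) are assumptions about the endpoint's response shape:

```python
import json

def to_live_entries(payload: dict) -> list[dict]:
    """Reduce each discovered model to conservative baseline fields."""
    return [
        {"model_name": m["id"], "context_window": m.get("context_length")}
        for m in payload.get("data", [])
    ]

sample = {"data": [{"id": "mistralai/mistral-large", "context_length": 128000}]}
entries = to_live_entries(sample)
# The script would then serialise these entries, e.g.:
# json.dump(entries, open("conf/openrouter_models_live.json", "w"), indent=2)
```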
Use the optional flags if you need to test against a different endpoint or write to a different file:
```bash
python scripts/sync_openrouter_models.py --url https://openrouter.ai/api/v1/models --output conf/openrouter_models_live.json
```
Important runtime behaviour:
- `conf/openrouter_models_live.json` is the generated baseline catalogue
- `conf/openrouter_models.json` is the curated override layer for aliases and PAL-specific capability flags
- curated entries win when the same `model_name` appears in both files
- models missing from the curated file are still available from the generated catalogue
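
A minimal sketch of that precedence, assuming each manifest parses to a list of entries keyed by `model_name` (the merge function is illustrative, not the real loader):

```python
def merge_catalogues(live: list[dict], curated: list[dict]) -> dict[str, dict]:
    """Overlay curated entries on the generated baseline; curated wins on conflicts."""
    merged = {e["model_name"]: e for e in live}
    merged.update((e["model_name"], e) for e in curated)
    return merged

live = [{"model_name": "vendor/model-a", "context_window": 32000},
        {"model_name": "vendor/model-b", "context_window": 8000}]
curated = [{"model_name": "vendor/model-a", "context_window": 32000,
            "aliases": ["model-a"]}]
catalogue = merge_catalogues(live, curated)
```

Note that `vendor/model-b` survives the merge even though it only appears in the generated file.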
After refreshing the catalogue:
1. Review the diff in `conf/openrouter_models_live.json`
2. Add or update curated entries in `conf/openrouter_models.json` if a new model needs aliases or PAL-specific capability tweaks
3. Restart the server so the updated manifests are reloaded
4. Commit the generated JSON alongside any curated overrides so other contributors get the same catalogue state
#### Adding an OpenRouter Model