fix: rebranding, see [docs/name-change.md](docs/name-change.md) for details

Author: Fahad
Date: 2025-12-04 18:11:55 +04:00
parent bcfaccecd4
commit b2dc84992d
122 changed files with 1423 additions and 1056 deletions

@@ -1,6 +1,6 @@
 # Custom Models & API Setup
 
-This guide covers setting up multiple AI model providers including OpenRouter, custom API endpoints, and local model servers. The Zen MCP server supports a unified configuration for all these providers through a single model registry.
+This guide covers setting up multiple AI model providers including OpenRouter, custom API endpoints, and local model servers. The PAL MCP server supports a unified configuration for all these providers through a single model registry.
 
 ## Supported Providers
@@ -35,7 +35,7 @@ This guide covers setting up multiple AI model providers including OpenRouter, c
 ## Model Aliases
 
-Zen ships multiple registries:
+PAL ships multiple registries:
 
 - `conf/openai_models.json` – native OpenAI catalogue (override with `OPENAI_MODELS_CONFIG_PATH`)
 - `conf/gemini_models.json` – native Google Gemini catalogue (`GEMINI_MODELS_CONFIG_PATH`)
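The `*_MODELS_CONFIG_PATH` overrides above point the server at your own registry files. A minimal sketch, assuming the server picks these variables up from its environment or `.env` file (the paths are illustrative):

```bash
# Use custom registry files instead of the bundled catalogues
OPENAI_MODELS_CONFIG_PATH=/abs/path/to/my_openai_models.json
GEMINI_MODELS_CONFIG_PATH=/abs/path/to/my_gemini_models.json
```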
@@ -148,7 +148,7 @@ CUSTOM_MODEL_NAME=llama3.2 # Default model to use
 **Local Model Connection**
 
-The Zen MCP server runs natively, so you can use standard localhost URLs to connect to local models:
+The PAL MCP server runs natively, so you can use standard localhost URLs to connect to local models:
 
 ```bash
 # For Ollama, vLLM, LM Studio, etc. running on your machine
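The hunk truncates this block. A hypothetical completion, assuming Ollama's default OpenAI-compatible endpoint and the `CUSTOM_API_URL`/`CUSTOM_API_KEY` variable names (neither appears in this excerpt):

```bash
CUSTOM_API_URL=http://localhost:11434/v1  # Ollama's OpenAI-compatible endpoint
CUSTOM_API_KEY=                           # local servers typically need no key
CUSTOM_MODEL_NAME=llama3.2                # default model to use
```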
@@ -192,9 +192,9 @@ CUSTOM_MODEL_NAME=your-loaded-model
 # OpenRouter models:
 "Use opus for deep analysis" # → anthropic/claude-opus-4
 "Use sonnet to review this code" # → anthropic/claude-sonnet-4
-"Use pro via zen to analyze this" # → google/gemini-2.5-pro
-"Use gpt4o via zen to analyze this" # → openai/gpt-4o
-"Use mistral via zen to optimize" # → mistral/mistral-large
+"Use pro via pal to analyze this" # → google/gemini-2.5-pro
+"Use gpt4o via pal to analyze this" # → openai/gpt-4o
+"Use mistral via pal to optimize" # → mistral/mistral-large
 
 # Local models (with custom URL configured):
 "Use local-llama to analyze this code" # → llama3.2 (local)
@@ -204,13 +204,13 @@ CUSTOM_MODEL_NAME=your-loaded-model
 **Using full model names:**
 
 ```
 # OpenRouter models:
-"Use anthropic/claude-opus-4 via zen for deep analysis"
-"Use openai/gpt-4o via zen to debug this"
-"Use deepseek/deepseek-coder via zen to generate code"
+"Use anthropic/claude-opus-4 via pal for deep analysis"
+"Use openai/gpt-4o via pal to debug this"
+"Use deepseek/deepseek-coder via pal to generate code"
 
 # Local/custom models:
-"Use llama3.2 via zen to review this"
-"Use meta-llama/Llama-2-7b-chat-hf via zen to analyze"
+"Use llama3.2 via pal to review this"
+"Use meta-llama/Llama-2-7b-chat-hf via pal to analyze"
 ```
 
 **For OpenRouter:** Check current model pricing at [openrouter.ai/models](https://openrouter.ai/models).
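Using the OpenRouter aliases above requires an API key. A minimal sketch, assuming the server reads `OPENROUTER_API_KEY` from its environment (check the main README if the variable name differs):

```bash
# Create a key at https://openrouter.ai/keys, then expose it to the server
OPENROUTER_API_KEY=sk-or-your-key-here
```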