Breaking change: openrouter_models.json -> custom_models.json
* Support for custom URLs and custom models, including locally hosted models such as Ollama (see the sketch below)
* Support for native + OpenRouter + local models (dozens of models in total) means you can start delegating sub-tasks to particular models, or hand off boring work such as localizations to a local model
* Several tests added
* Pre-commit now also includes untracked (new) files
* Log file auto rollover
* Improved logging
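The renamed custom_models.json is the registry for these extra models. Below is a minimal sketch of loading such a registry and resolving a model by name or alias, honouring the CUSTOM_MODELS_CONFIG_PATH override added to .env.example further down; the field names (models, model_name, aliases) are illustrative assumptions, so check the shipped custom_models.json for the authoritative schema.

```python
import json
import os

# CUSTOM_MODELS_CONFIG_PATH (see .env.example) can point at a custom location;
# otherwise fall back to the default file name.
path = os.getenv("CUSTOM_MODELS_CONFIG_PATH", "custom_models.json")
with open(path) as f:
    registry = json.load(f)

def resolve(name: str) -> dict | None:
    """Return the registry entry whose model_name or alias matches `name`.

    Field names here are assumptions for illustration, not the project's
    guaranteed schema.
    """
    for entry in registry.get("models", []):
        if name == entry.get("model_name") or name in entry.get("aliases", []):
            return entry
    return None

print(resolve("llama3.2"))
```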
.env.example (27 lines changed)
@@ -1,6 +1,11 @@
# Zen MCP Server Environment Configuration
# Copy this file to .env and fill in your values

# Required: Workspace root directory for file access
# This should be the HOST path that contains all files Claude might reference
# Defaults to $HOME for direct usage, auto-configured for Docker
WORKSPACE_ROOT=/Users/your-username

# API Keys - At least one is required
#
# IMPORTANT: Use EITHER OpenRouter OR native APIs (Gemini/OpenAI), not both!
@@ -18,10 +23,13 @@ OPENAI_API_KEY=your_openai_api_key_here
# If using OpenRouter, comment out the native API keys above
OPENROUTER_API_KEY=your_openrouter_api_key_here

# Optional: Restrict which models can be used via OpenRouter (recommended for cost control)
# Example: OPENROUTER_ALLOWED_MODELS=gpt-4,claude-3-opus,mistral-large
# Leave empty to allow ANY model (not recommended - risk of high costs)
OPENROUTER_ALLOWED_MODELS=
# Option 3: Use custom API endpoints for local models (Ollama, vLLM, LM Studio, etc.)
# IMPORTANT: Since this server ALWAYS runs in Docker, you MUST use host.docker.internal instead of localhost
# ❌ WRONG: http://localhost:11434/v1 (Docker containers cannot reach localhost)
# ✅ CORRECT: http://host.docker.internal:11434/v1 (Docker can reach host services)
CUSTOM_API_URL=http://host.docker.internal:11434/v1 # Ollama example (NOT localhost!)
CUSTOM_API_KEY= # Empty for Ollama (no auth needed)
CUSTOM_MODEL_NAME=llama3.2 # Default model name

# Optional: Default model to use
# Options: 'auto' (Claude picks best model), 'pro', 'flash', 'o3', 'o3-mini'
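OPENROUTER_ALLOWED_MODELS is a plain comma-separated allow-list. A minimal sketch of how such a gate could be applied — the helper name openrouter_model_allowed is purely illustrative and the server's actual enforcement may differ:

```python
import os

def openrouter_model_allowed(model: str) -> bool:
    """Allow everything when the variable is empty; otherwise enforce the list."""
    raw = os.getenv("OPENROUTER_ALLOWED_MODELS", "")
    allowed = {m.strip() for m in raw.split(",") if m.strip()}
    return not allowed or model in allowed

print(openrouter_model_allowed("gpt-4"))  # True when listed or when the list is empty
```

CUSTOM_API_URL can be any OpenAI-compatible endpoint. Assuming the standard `openai` Python package (not necessarily what the server uses internally), this is a quick way to confirm the Ollama endpoint responds. Note that host.docker.internal only resolves from inside the container; when running this snippet on the host itself, use http://localhost:11434/v1.

```python
import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    # Inside the Docker container this reaches the host's Ollama daemon;
    # on the host machine, swap in http://localhost:11434/v1.
    base_url=os.getenv("CUSTOM_API_URL", "http://host.docker.internal:11434/v1"),
    # Ollama ignores the key, but the client library requires a non-empty value.
    api_key=os.getenv("CUSTOM_API_KEY") or "ollama",
)

reply = client.chat.completions.create(
    model=os.getenv("CUSTOM_MODEL_NAME", "llama3.2"),
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(reply.choices[0].message.content)
```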
@@ -41,10 +49,13 @@ DEFAULT_MODEL=auto
# Defaults to 'high' if not specified
DEFAULT_THINKING_MODE_THINKDEEP=high

# Optional: Workspace root directory for file access
# This should be the HOST path that contains all files Claude might reference
# Defaults to $HOME for direct usage, auto-configured for Docker
WORKSPACE_ROOT=/Users/your-username
# Optional: Custom model configuration file path
# Override the default location of custom_models.json
# CUSTOM_MODELS_CONFIG_PATH=/path/to/your/custom_models.json

# Optional: Redis configuration (auto-configured for Docker)
# The Redis URL for conversation threading - typically managed by docker-compose
# REDIS_URL=redis://redis:6379/0

# Optional: Logging level (DEBUG, INFO, WARNING, ERROR)
# DEBUG: Shows detailed operational messages for troubleshooting
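The commit also adds log file auto rollover and improved logging. One conventional way to combine a LOG_LEVEL-style variable with size-based rollover using only the standard library — the variable name and log file path here are assumptions, and this is a sketch rather than the server's actual wiring:

```python
import logging
import os
from logging.handlers import RotatingFileHandler

# Respect the logging level from the environment; fall back to INFO.
level = getattr(logging, os.getenv("LOG_LEVEL", "INFO").upper(), logging.INFO)

# Roll the log file over at roughly 10 MB and keep five old copies.
handler = RotatingFileHandler("mcp_server.log", maxBytes=10_000_000, backupCount=5)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))

logging.basicConfig(level=level, handlers=[handler])
logging.getLogger(__name__).info("Logging configured with auto rollover")
```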