feat!: breaking change - OpenRouter models are now read from conf/openrouter_models.json, while Custom / Self-hosted models are read from conf/custom_models.json

feat: Azure OpenAI / Azure AI Foundry support. Models should be defined in conf/azure_models.json (or a custom path). See .env.example or the README for the environment variables. https://github.com/BeehiveInnovations/zen-mcp-server/issues/265

feat: OpenRouter / Custom Models / Azure can each use a custom config path now (see .env.example)

refactor: the model registry class is now abstract; OpenRouter / Custom Provider / Azure OpenAI now subclass it

refactor: breaking change: the `is_custom` property has been removed from model_capabilities.py (and thus from custom_models.json), since each provider's models are now read from a separate configuration file
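The config-path resolution described above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual code: the function name `load_azure_models` and the JSON field names `models` / `model_name` are assumptions; only the environment variable names (`AZURE_MODELS_CONFIG_PATH`, `AZURE_OPENAI_ALLOWED_MODELS`) and the default path come from the commit.

```python
import json
import os


def load_azure_models(default_path="conf/azure_models.json"):
    """Hypothetical sketch: resolve the Azure model config path,
    load the model list, and apply the optional allow-list filter.

    AZURE_MODELS_CONFIG_PATH overrides the default conf/ location;
    AZURE_OPENAI_ALLOWED_MODELS (comma-separated) restricts which
    entries are exposed. Field names are illustrative assumptions.
    """
    path = os.environ.get("AZURE_MODELS_CONFIG_PATH", default_path)
    with open(path, encoding="utf-8") as fh:
        models = json.load(fh).get("models", [])

    allowed = os.environ.get("AZURE_OPENAI_ALLOWED_MODELS", "").strip()
    if allowed:
        allowed_set = {name.strip().lower() for name in allowed.split(",")}
        models = [
            m for m in models
            if m.get("model_name", "").lower() in allowed_set
        ]
    return models
```

The same pattern would apply to the OpenRouter and Custom providers with their own default paths and override variables.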
@@ -17,6 +17,15 @@ GEMINI_API_KEY=your_gemini_api_key_here
 # Get your OpenAI API key from: https://platform.openai.com/api-keys
 OPENAI_API_KEY=your_openai_api_key_here
 
+# Azure OpenAI mirrors OpenAI models through Azure-hosted deployments
+# Set the endpoint from Azure Portal. Models are defined in conf/azure_models.json
+# (or the file referenced by AZURE_MODELS_CONFIG_PATH).
+AZURE_OPENAI_API_KEY=your_azure_openai_key_here
+AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
+# AZURE_OPENAI_API_VERSION=2024-02-15-preview
+# AZURE_OPENAI_ALLOWED_MODELS=gpt-4o,gpt-4o-mini
+# AZURE_MODELS_CONFIG_PATH=/absolute/path/to/custom_azure_models.json
+
 # Get your X.AI API key from: https://console.x.ai/
 XAI_API_KEY=your_xai_api_key_here
 