WIP - OpenRouter support
@@ -2,18 +2,25 @@
# Copy this file to .env and fill in your values

# API Keys - At least one is required
#
# IMPORTANT: Use EITHER OpenRouter OR native APIs (Gemini/OpenAI), not both!
# Having both creates ambiguity about which provider serves each model.
#
# Option 1: Use native APIs (recommended for direct access)
# Get your Gemini API key from: https://makersuite.google.com/app/apikey
GEMINI_API_KEY=your_gemini_api_key_here

# Get your OpenAI API key from: https://platform.openai.com/api-keys
OPENAI_API_KEY=your_openai_api_key_here

# Optional: OpenRouter for access to multiple models
# Option 2: Use OpenRouter for access to multiple models through one API
# Get your OpenRouter API key from: https://openrouter.ai/
# If using OpenRouter, comment out the native API keys above
OPENROUTER_API_KEY=your_openrouter_api_key_here

# Optional: Restrict which models can be used via OpenRouter (recommended for cost control)
# Example: OPENROUTER_ALLOWED_MODELS=gpt-4,claude-3-opus,mistral-large
# Leave empty to allow ANY model (not recommended - risk of high costs)
OPENROUTER_ALLOWED_MODELS=

# Optional: Default model to use
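For illustration, the comma-separated allow-list above could be parsed and enforced along these lines (a minimal sketch with hypothetical helper names, not the project's actual code):

```python
import os

def parse_allowed_models(raw: str) -> set[str]:
    # Split a comma-separated allow-list, ignoring blanks and stray whitespace.
    return {m.strip() for m in raw.split(",") if m.strip()}

def is_model_allowed(model: str, allowed: set[str]) -> bool:
    # An empty allow-list means ANY model is permitted (the risky default).
    return not allowed or model in allowed

allowed = parse_allowed_models(os.getenv("OPENROUTER_ALLOWED_MODELS", ""))
```

Setting `OPENROUTER_ALLOWED_MODELS=gpt-4,claude-3-opus` would then permit exactly those two model names and reject everything else.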
README.md
@@ -98,9 +98,19 @@ The final implementation resulted in a 26% improvement in JSON parsing performance
- **Windows users**: WSL2 is required for Claude Code CLI

### 1. Get API Keys (at least one required)

**Important:** Choose EITHER native APIs OR OpenRouter, not both:

**Option A: Native APIs (Recommended)**
- **Gemini**: Visit [Google AI Studio](https://makersuite.google.com/app/apikey) and generate an API key. For best results with Gemini 2.5 Pro, use a paid API key as the free tier has limited access to the latest models.
- **OpenAI**: Visit [OpenAI Platform](https://platform.openai.com/api-keys) to get an API key for O3 model access.

**Option B: OpenRouter (Access multiple models with one API)**
- **OpenRouter**: Visit [OpenRouter](https://openrouter.ai/) for access to multiple models through one API. [Setup Guide](docs/openrouter.md)
- Set `OPENROUTER_ALLOWED_MODELS` to restrict which models can be used (recommended)
- Leave empty to allow ANY model (warning: some models are expensive!)

> **Note:** Using both OpenRouter and native APIs creates ambiguity about which provider serves each model. If both are configured, native APIs will take priority.

### 2. Clone and Set Up
@@ -2,6 +2,20 @@
OpenRouter provides unified access to multiple AI models (GPT-4, Claude, Mistral, etc.) through a single API.

## When to Use OpenRouter

**Use OpenRouter when you want:**
- Access to models not available through native APIs (GPT-4, Claude, Mistral, etc.)
- Simplified billing across multiple model providers
- Experimentation with various models without separate API keys

**Use native APIs (Gemini/OpenAI) when you want:**
- Direct access to specific providers without an intermediary
- Potentially lower latency and costs
- Access to the latest model features immediately upon release

**Important:** Don't use both OpenRouter and native APIs simultaneously - this creates ambiguity about which provider serves each model.

## Quick Start

### 1. Get API Key
@@ -13,25 +27,33 @@ OpenRouter provides unified access to multiple AI models (GPT-4, Claude, Mistral
```bash
# Add to your .env file
OPENROUTER_API_KEY=your-openrouter-api-key

# IMPORTANT: Set allowed models to control costs
OPENROUTER_ALLOWED_MODELS=gpt-4,claude-3-sonnet,mistral-large

# Or leave empty to allow ANY model (WARNING: risk of high costs!)
# OPENROUTER_ALLOWED_MODELS=
```

That's it! Docker Compose already includes all necessary configuration.

### 3. Use Any Model
### 3. Use Models

**If you set OPENROUTER_ALLOWED_MODELS:**
```
# Examples
# Only these models will work:
"Use gpt-4 via zen to review this code"
"Use claude-3-opus via zen to debug this error"
"Use claude-3-sonnet via zen to debug this error"
"Use mistral-large via zen to optimize this algorithm"
```

## Cost Control (Recommended)

Restrict which models can be used to prevent unexpected charges:

```bash
# Add to .env file - only allow specific models
OPENROUTER_ALLOWED_MODELS=gpt-4,claude-3-sonnet,mistral-large
```

**If you leave OPENROUTER_ALLOWED_MODELS empty:**
```
# ANY model available on OpenRouter will work:
"Use gpt-4o via zen to analyze this"
"Use claude-3-opus via zen for deep analysis"
"Use deepseek-coder via zen to generate code"
# WARNING: Some models can be very expensive!
```

Check current model pricing at [openrouter.ai/models](https://openrouter.ai/models).
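For context, OpenRouter exposes an OpenAI-compatible chat-completions endpoint; a request to it can be sketched like this (the payload is assembled but not sent, and the model name is only an example):

```python
import json
import os

def build_openrouter_request(model: str, prompt: str) -> tuple[str, dict, bytes]:
    # Assemble URL, headers, and JSON body for an OpenRouter chat completion.
    url = "https://openrouter.ai/api/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {os.getenv('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, headers, body
```

Because the wire format matches OpenAI's, switching between an allowed model like `gpt-4` and any other OpenRouter model is just a change of the `model` string, which is exactly why an allow-list matters for cost control.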
server.py
@@ -125,7 +125,7 @@ def configure_providers():
    At least one valid API key (Gemini or OpenAI) is required.

    Raises:
        ValueError: If no valid API keys are found
        ValueError: If no valid API keys are found or conflicting configurations detected
    """
    from providers import ModelProviderRegistry
    from providers.base import ProviderType
@@ -134,28 +134,59 @@ def configure_providers():
    from providers.openrouter import OpenRouterProvider

    valid_providers = []
    has_native_apis = False
    has_openrouter = False

    # Check for Gemini API key
    gemini_key = os.getenv("GEMINI_API_KEY")
    if gemini_key and gemini_key != "your_gemini_api_key_here":
        ModelProviderRegistry.register_provider(ProviderType.GOOGLE, GeminiModelProvider)
        valid_providers.append("Gemini")
        has_native_apis = True
        logger.info("Gemini API key found - Gemini models available")

    # Check for OpenAI API key
    openai_key = os.getenv("OPENAI_API_KEY")
    if openai_key and openai_key != "your_openai_api_key_here":
        ModelProviderRegistry.register_provider(ProviderType.OPENAI, OpenAIModelProvider)
        valid_providers.append("OpenAI (o3)")
        has_native_apis = True
        logger.info("OpenAI API key found - o3 model available")

    # Check for OpenRouter API key
    openrouter_key = os.getenv("OPENROUTER_API_KEY")
    if openrouter_key and openrouter_key != "your_openrouter_api_key_here":
        ModelProviderRegistry.register_provider(ProviderType.OPENROUTER, OpenRouterProvider)
        valid_providers.append("OpenRouter")
        has_openrouter = True
        logger.info("OpenRouter API key found - Multiple models available via OpenRouter")

    # Check for conflicting configuration
    if has_native_apis and has_openrouter:
        logger.warning(
            "\n" + "=" * 70 + "\n"
            "WARNING: Both OpenRouter and native API keys detected!\n"
            "\n"
            "This creates ambiguity about which provider will be used for models\n"
            "available through both APIs (e.g., 'o3' could come from OpenAI or OpenRouter).\n"
            "\n"
            "RECOMMENDATION: Use EITHER OpenRouter OR native APIs, not both.\n"
            "\n"
            "To fix this:\n"
            "1. Use only OpenRouter: unset GEMINI_API_KEY and OPENAI_API_KEY\n"
            "2. Use only native APIs: unset OPENROUTER_API_KEY\n"
            "\n"
            "Current configuration will prioritize native APIs over OpenRouter.\n" +
            "=" * 70 + "\n"
        )

    # Register providers - native APIs first to ensure they take priority
    if has_native_apis:
        if gemini_key and gemini_key != "your_gemini_api_key_here":
            ModelProviderRegistry.register_provider(ProviderType.GOOGLE, GeminiModelProvider)
        if openai_key and openai_key != "your_openai_api_key_here":
            ModelProviderRegistry.register_provider(ProviderType.OPENAI, OpenAIModelProvider)

    # Register OpenRouter last so native APIs take precedence
    if has_openrouter:
        ModelProviderRegistry.register_provider(ProviderType.OPENROUTER, OpenRouterProvider)

    # Require at least one valid provider
    if not valid_providers:
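The registration order above is what gives native APIs precedence. A stripped-down registry illustrating the mechanism (this is a sketch, not the project's actual ModelProviderRegistry):

```python
class TinyRegistry:
    """First matching provider wins, mirroring 'native first, OpenRouter last'."""

    def __init__(self):
        self._providers = {}  # dicts preserve insertion order in Python 3.7+

    def register_provider(self, ptype, provider):
        self._providers[ptype] = provider

    def get_provider_for_model(self, model):
        # Iterate in registration order: earlier registrations take priority.
        for ptype, provider in self._providers.items():
            if model in provider.models:
                return ptype
        return None

class FakeProvider:
    def __init__(self, models):
        self.models = set(models)

reg = TinyRegistry()
reg.register_provider("openai", FakeProvider({"o3"}))
reg.register_provider("openrouter", FakeProvider({"o3", "gpt-4"}))
```

Here `o3` resolves to the "openai" entry even though the "openrouter" entry also offers it, which is the behavior the warning message promises.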
@@ -168,6 +199,10 @@ def configure_providers():
    logger.info(f"Available providers: {', '.join(valid_providers)}")

    # Log provider priority if both are configured
    if has_native_apis and has_openrouter:
        logger.info("Provider priority: Native APIs (Gemini, OpenAI) will be checked before OpenRouter")


@server.list_tools()
async def handle_list_tools() -> list[Tool]:
@@ -504,6 +539,22 @@ async def handle_get_version() -> list[TextContent]:
        "available_tools": list(TOOLS.keys()) + ["get_version"],
    }

    # Check configured providers
    from providers import ModelProviderRegistry
    from providers.base import ProviderType

    configured_providers = []
    if ModelProviderRegistry.get_provider(ProviderType.GOOGLE):
        configured_providers.append("Gemini (flash, pro)")
    if ModelProviderRegistry.get_provider(ProviderType.OPENAI):
        configured_providers.append("OpenAI (o3, o3-mini)")
    if ModelProviderRegistry.get_provider(ProviderType.OPENROUTER):
        openrouter_allowed = os.getenv("OPENROUTER_ALLOWED_MODELS", "")
        if openrouter_allowed:
            configured_providers.append(f"OpenRouter (restricted to: {openrouter_allowed})")
        else:
            configured_providers.append("OpenRouter (ANY model on openrouter.ai)")

    # Format the information in a human-readable way
    text = f"""Zen MCP Server v{__version__}
Updated: {__updated__}
@@ -516,6 +567,9 @@ Configuration:
- Python: {version_info["python_version"]}
- Started: {version_info["server_started"]}

Configured Providers:
{chr(10).join(f" - {provider}" for provider in configured_providers)}

Available Tools:
{chr(10).join(f" - {tool}" for tool in version_info["available_tools"])}
@@ -118,6 +118,35 @@ if [ -n "${OPENROUTER_API_KEY:-}" ] && [ "$OPENROUTER_API_KEY" != "your_openrout
    echo "✅ Valid OPENROUTER_API_KEY found"
fi

# Check for conflicting configuration
if [ "$VALID_OPENROUTER_KEY" = true ] && ([ "$VALID_GEMINI_KEY" = true ] || [ "$VALID_OPENAI_KEY" = true ]); then
    echo ""
    echo "⚠️ WARNING: Conflicting API configuration detected!"
    echo ""
    echo "You have configured both:"
    echo " - OpenRouter API key"
    if [ "$VALID_GEMINI_KEY" = true ]; then
        echo " - Native Gemini API key"
    fi
    if [ "$VALID_OPENAI_KEY" = true ]; then
        echo " - Native OpenAI API key"
    fi
    echo ""
    echo "This creates ambiguity about which provider to use for models available"
    echo "through multiple APIs (e.g., 'o3' could come from OpenAI or OpenRouter)."
    echo ""
    echo "RECOMMENDATION: Use EITHER OpenRouter OR native APIs, not both."
    echo ""
    echo "To fix this, edit .env and:"
    echo " Option 1: Use only OpenRouter - comment out GEMINI_API_KEY and OPENAI_API_KEY"
    echo " Option 2: Use only native APIs - comment out OPENROUTER_API_KEY"
    echo ""
    echo "The server will start anyway, but native APIs will take priority over OpenRouter."
    echo ""
    # Give user time to read the warning
    sleep 3
fi

# Require at least one valid API key
if [ "$VALID_GEMINI_KEY" = false ] && [ "$VALID_OPENAI_KEY" = false ] && [ "$VALID_OPENROUTER_KEY" = false ]; then
    echo ""
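The placeholder-vs-real-key check the script performs can be expressed in a few lines of Python as well (a sketch with hypothetical helper names, mirroring the shell logic rather than reproducing the project's code):

```python
import os

# Template values from .env.example that must not count as real keys.
PLACEHOLDERS = {
    "your_gemini_api_key_here",
    "your_openai_api_key_here",
    "your_openrouter_api_key_here",
}

def has_valid_key(var: str) -> bool:
    # A key counts as valid only if it is set and not a template placeholder.
    value = os.getenv(var, "")
    return bool(value) and value not in PLACEHOLDERS

def conflicting_config() -> bool:
    # True when both OpenRouter and at least one native API key are configured.
    native = has_valid_key("GEMINI_API_KEY") or has_valid_key("OPENAI_API_KEY")
    return native and has_valid_key("OPENROUTER_API_KEY")
```

This is the same rule in both places: a copied-but-unedited .env should behave exactly like an empty one.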
@@ -153,12 +153,20 @@ class BaseTool(ABC):
        Dict containing the model field JSON schema
        """
        from config import DEFAULT_MODEL, IS_AUTO_MODE, MODEL_CAPABILITIES_DESC
        import os

        # Check if OpenRouter is configured
        has_openrouter = bool(os.getenv("OPENROUTER_API_KEY") and
                              os.getenv("OPENROUTER_API_KEY") != "your_openrouter_api_key_here")

        if IS_AUTO_MODE:
            # In auto mode, model is required and we provide detailed descriptions
            model_desc_parts = ["Choose the best model for this task based on these capabilities:"]
            for model, desc in MODEL_CAPABILITIES_DESC.items():
                model_desc_parts.append(f"- '{model}': {desc}")

            if has_openrouter:
                model_desc_parts.append("\nOpenRouter models: If configured, you can also use ANY model available on OpenRouter (e.g., 'gpt-4', 'claude-3-opus', 'mistral-large'). Check openrouter.ai/models for available models.")

            return {
                "type": "string",
@@ -169,9 +177,15 @@ class BaseTool(ABC):
        # Normal mode - model is optional with default
        available_models = list(MODEL_CAPABILITIES_DESC.keys())
        models_str = ", ".join(f"'{m}'" for m in available_models)

        description = f"Model to use. Native models: {models_str}."
        if has_openrouter:
            description += " OpenRouter: Any model available on openrouter.ai (e.g., 'gpt-4', 'claude-3-opus', 'mistral-large')."
        description += f" Defaults to '{DEFAULT_MODEL}' if not specified."

        return {
            "type": "string",
            "description": f"Model to use. Available: {models_str}. Defaults to '{DEFAULT_MODEL}' if not specified.",
            "description": description,
        }

    def get_default_temperature(self) -> float:
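The branching above yields a schema whose description changes depending on whether OpenRouter is configured. A condensed, self-contained sketch of that behavior (the model list and default are examples, not the project's actual MODEL_CAPABILITIES_DESC):

```python
def build_model_field_schema(models: list[str], default: str, has_openrouter: bool) -> dict:
    # Build the "model" field's JSON schema, widening the description
    # when OpenRouter is available.
    models_str = ", ".join(f"'{m}'" for m in models)
    description = f"Model to use. Native models: {models_str}."
    if has_openrouter:
        description += " OpenRouter: Any model available on openrouter.ai."
    description += f" Defaults to '{default}' if not specified."
    return {"type": "string", "description": description}
```

The schema stays a plain string field either way; only the guidance text shown to the client changes.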