Support for allowed model restrictions per provider
Tool escalation added to `analyze`: a graceful switch over to `codereview` is made when absolutely necessary
.env.example (31 lines changed)
@@ -32,7 +32,7 @@ CUSTOM_API_KEY= # Empty for Ollama (no auth
 CUSTOM_MODEL_NAME=llama3.2 # Default model name
 
 # Optional: Default model to use
-# Options: 'auto' (Claude picks best model), 'pro', 'flash', 'o3', 'o3-mini'
+# Options: 'auto' (Claude picks best model), 'pro', 'flash', 'o3', 'o3-mini', 'o4-mini', 'o4-mini-high' etc
 # When set to 'auto', Claude will select the best model for each task
 # Defaults to 'auto' if not specified
 DEFAULT_MODEL=auto
@@ -49,6 +49,35 @@ DEFAULT_MODEL=auto
 # Defaults to 'high' if not specified
 DEFAULT_THINKING_MODE_THINKDEEP=high
 
+# Optional: Model usage restrictions
+# Limit which models can be used from each provider for cost control, compliance, or standardization
+# Format: Comma-separated list of allowed model names (case-insensitive, whitespace tolerant)
+# Empty or unset = all models allowed (default behavior)
+# If you want to disable a provider entirely, don't set its API key
+#
+# Supported OpenAI models:
+# - o3 (200K context, high reasoning)
+# - o3-mini (200K context, balanced)
+# - o4-mini (200K context, latest balanced, temperature=1.0 only)
+# - o4-mini-high (200K context, enhanced reasoning, temperature=1.0 only)
+# - mini (shorthand for o4-mini)
+#
+# Supported Google/Gemini models:
+# - gemini-2.5-flash-preview-05-20 (1M context, fast, supports thinking)
+# - gemini-2.5-pro-preview-06-05 (1M context, powerful, supports thinking)
+# - flash (shorthand for gemini-2.5-flash-preview-05-20)
+# - pro (shorthand for gemini-2.5-pro-preview-06-05)
+#
+# Examples:
+# OPENAI_ALLOWED_MODELS=o3-mini,o4-mini,mini  # Only allow mini models (cost control)
+# GOOGLE_ALLOWED_MODELS=flash                 # Only allow Flash (fast responses)
+# OPENAI_ALLOWED_MODELS=o4-mini               # Single model standardization
+# GOOGLE_ALLOWED_MODELS=flash,pro             # Allow both Gemini models
+#
+# Note: These restrictions apply even in 'auto' mode - Claude will only pick from allowed models
+# OPENAI_ALLOWED_MODELS=
+# GOOGLE_ALLOWED_MODELS=
+
 # Optional: Custom model configuration file path
 # Override the default location of custom_models.json
 # CUSTOM_MODELS_CONFIG_PATH=/path/to/your/custom_models.json