feat: Add comprehensive dynamic configuration system v3.3.0
## Major Features Added

### 🎯 Dynamic Configuration System
- **Environment-aware model selection**: DEFAULT_MODEL with 'pro'/'flash' shortcuts
- **Configurable thinking modes**: DEFAULT_THINKING_MODE_THINKDEEP for extended reasoning
- **All tool schemas now dynamic**: Show actual current defaults instead of hardcoded values
- **Enhanced setup workflow**: Copy from .env.example with smart customization

### 🔧 Model & Thinking Configuration
- **Smart model resolution**: Supports both shortcuts ('pro', 'flash') and full model names
- **Thinking mode optimization**: Applies a thinking budget only to models that support it
- **Flash model compatibility**: Works without a thinking config; still benefits via system prompts
- **Dynamic schema descriptions**: Tool parameters show current environment values

### 🚀 Enhanced Developer Experience
- **Fail-fast Docker setup**: GEMINI_API_KEY required upfront in docker-compose
- **Comprehensive startup logging**: Shows current model and thinking mode defaults
- **Enhanced get_version tool**: Reports all dynamic configuration values
- **Better .env documentation**: Clear token consumption details and model options

### 🧪 Comprehensive Testing
- **Live model validation**: New simulator test validates Pro vs Flash thinking behavior
- **Dynamic configuration tests**: Verify environment variable overrides work correctly
- **Complete test coverage**: All 139 unit tests pass, including new model config tests

### 📋 Configuration Files Updated
- **docker-compose.yml**: Fail-fast API key validation, thinking mode support
- **setup-docker.sh**: Copies from .env.example instead of manual creation
- **.env.example**: Detailed documentation with token consumption per thinking mode
- **.gitignore**: Added test-setup/ for cleanup

### 🛠 Technical Improvements
- **Removed setup.py**: Fully Docker-based deployment (no longer needed)
- **REDIS_URL smart defaults**: Auto-configured for Docker, still configurable for dev
- **All tools updated**: Consistent dynamic model parameter descriptions
- **Enhanced error handling**: Better model resolution and validation

## Breaking Changes
- Removed setup.py (Docker-only deployment)
- Model parameter descriptions now show actual defaults (dynamic)

## Migration Guide
- Update .env files using the new .env.example format (see the example below)
- Use 'pro'/'flash' shortcuts or full model names
- Set DEFAULT_THINKING_MODE_THINKDEEP for custom thinking depth

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
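To make the migration steps concrete, here is a hypothetical `.env` fragment using the variables this release introduces. The shipped `.env.example` is the authoritative template; its comments, token-consumption notes, and exact defaults may differ from this sketch.

```
# Required up front: docker-compose now fails fast without it
GEMINI_API_KEY=your-gemini-api-key

# 'pro' / 'flash' shortcut or a full model name (default: gemini-2.5-pro-preview-06-05)
DEFAULT_MODEL=pro

# Thinking depth for the thinkdeep tool; higher modes consume more tokens (default: high)
DEFAULT_THINKING_MODE_THINKDEEP=high
```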
config.py
@@ -13,15 +13,15 @@ import os
 # Version and metadata
 # These values are used in server responses and for tracking releases
 # IMPORTANT: This is the single source of truth for version and author info
-# setup.py imports these values to avoid duplication
-__version__ = "3.2.0"  # Semantic versioning: MAJOR.MINOR.PATCH
-__updated__ = "2025-06-10"  # Last update date in ISO format
+__version__ = "3.3.0"  # Semantic versioning: MAJOR.MINOR.PATCH
+__updated__ = "2025-06-11"  # Last update date in ISO format
 __author__ = "Fahad Gilani"  # Primary maintainer
 
 # Model configuration
-# GEMINI_MODEL: The Gemini model used for all AI operations
+# DEFAULT_MODEL: The default model used for all AI operations
 # This should be a stable, high-performance model suitable for code analysis
-GEMINI_MODEL = "gemini-2.5-pro-preview-06-05"
+# Can be overridden by setting DEFAULT_MODEL environment variable
+DEFAULT_MODEL = os.getenv("DEFAULT_MODEL", "gemini-2.5-pro-preview-06-05")
 
 # Token allocation for Gemini Pro (1M total capacity)
 # MAX_CONTEXT_TOKENS: Total model capacity
@@ -48,6 +48,11 @@ TEMPERATURE_BALANCED = 0.5  # For general chat
 # Used when brainstorming, exploring alternatives, or architectural discussions
 TEMPERATURE_CREATIVE = 0.7  # For architecture, deep thinking
 
+# Thinking Mode Defaults
+# DEFAULT_THINKING_MODE_THINKDEEP: Default thinking depth for extended reasoning tool
+# Higher modes use more computational budget but provide deeper analysis
+DEFAULT_THINKING_MODE_THINKDEEP = os.getenv("DEFAULT_THINKING_MODE_THINKDEEP", "high")
+
 # MCP Protocol Limits
 # MCP_PROMPT_SIZE_LIMIT: Maximum character size for prompts sent directly through MCP
 # The MCP protocol has a combined request+response limit of ~25K tokens.
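For context, a minimal Python sketch of how a tool could consume these new defaults, following the behavior this commit describes (shortcut resolution, thinking budget applied only to models that support it). The helper names `resolve_model` and `build_generation_kwargs`, the shortcut-to-model mapping, the flash model identifier, and the per-mode budget numbers are illustrative assumptions, not the server's actual implementation:

```python
import os

# Defaults mirroring config.py above
DEFAULT_MODEL = os.getenv("DEFAULT_MODEL", "gemini-2.5-pro-preview-06-05")
DEFAULT_THINKING_MODE_THINKDEEP = os.getenv("DEFAULT_THINKING_MODE_THINKDEEP", "high")

# Hypothetical mapping from shortcuts to full model identifiers
MODEL_SHORTCUTS = {
    "pro": "gemini-2.5-pro-preview-06-05",
    "flash": "gemini-2.5-flash-preview-05-20",  # illustrative; check the real flash model id
}


def resolve_model(name: str) -> str:
    """Accept either a shortcut ('pro', 'flash') or a full model name."""
    return MODEL_SHORTCUTS.get(name.lower(), name)


def build_generation_kwargs(model: str, thinking_mode: str) -> dict:
    """Attach a thinking budget only when the resolved model supports it."""
    kwargs = {"model": resolve_model(model)}
    if "pro" in kwargs["model"]:  # crude capability check, for illustration only
        # Hypothetical budgets per mode; the real token numbers live in .env.example
        budgets = {"low": 2048, "medium": 8192, "high": 16384, "max": 32768}
        kwargs["thinking_budget"] = budgets.get(thinking_mode, budgets["high"])
    return kwargs


if __name__ == "__main__":
    print(build_generation_kwargs("flash", DEFAULT_THINKING_MODE_THINKDEEP))
    print(build_generation_kwargs(DEFAULT_MODEL, DEFAULT_THINKING_MODE_THINKDEEP))
```

The gating is the point: Flash runs without any thinking configuration (it still benefits from the system prompts), while Pro receives the budget implied by DEFAULT_THINKING_MODE_THINKDEEP.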