Migration from Docker to Standalone Python Server (#73)

* Migration from docker to standalone server
Migration handling
Fixed tests
Use simpler in-memory storage
Support for concurrent logging to disk
Simplified direct connections to localhost

* Migration from docker / redis to standalone script
Updated tests
Updated run script
Fixed requirements
Use dotenv
Ask if user would like to install MCP in Claude Desktop once
Updated docs

* More cleanup and references to docker removed

* Cleanup

* Comments

* Fixed tests

* Fix GitHub Actions workflow for standalone Python architecture

- Install requirements-dev.txt for pytest and testing dependencies
- Remove Docker setup from simulation tests (now standalone)
- Simplify linting job to use requirements-dev.txt
- Update simulation tests to run directly without Docker

Fixes unit test failures in CI due to missing pytest dependency.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Remove simulation tests from GitHub Actions

- Removed simulation-tests job that makes real API calls
- Keep only unit tests (mocked, no API costs) and linting
- Simulation tests should be run manually with real API keys
- Reduces CI costs and complexity

GitHub Actions now only runs:
- Unit tests (569 tests, all mocked)
- Code quality checks (ruff, black)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fixed tests

* Fixed tests

---------

Co-authored-by: Claude <noreply@anthropic.com>
Author: Beehive Innovations
Date: 2025-06-18 23:41:22 +04:00 (committed by GitHub)
parent 9d72545ecd · commit 4151c3c3a5
121 changed files with 2842 additions and 3168 deletions

---

@@ -320,32 +320,7 @@ def _get_api_key_for_provider(cls, provider_type: ProviderType) -> Optional[str]
# ... rest of the method
```
-### 4. Configure Docker Environment Variables
-**CRITICAL**: You must add your provider's environment variables to `docker-compose.yml` for them to be available in the Docker container.
-Add your API key and restriction variables to the `environment` section:
-```yaml
-services:
-  zen-mcp:
-    # ... other configuration ...
-    environment:
-      - GEMINI_API_KEY=${GEMINI_API_KEY:-}
-      - OPENAI_API_KEY=${OPENAI_API_KEY:-}
-      - EXAMPLE_API_KEY=${EXAMPLE_API_KEY:-}  # Add this line
-      # OpenRouter support
-      - OPENROUTER_API_KEY=${OPENROUTER_API_KEY:-}
-      # ... other variables ...
-      # Model usage restrictions
-      - OPENAI_ALLOWED_MODELS=${OPENAI_ALLOWED_MODELS:-}
-      - GOOGLE_ALLOWED_MODELS=${GOOGLE_ALLOWED_MODELS:-}
-      - EXAMPLE_ALLOWED_MODELS=${EXAMPLE_ALLOWED_MODELS:-}  # Add this line
-```
-⚠️ **Without this step**, the Docker container won't have access to your environment variables, and your provider won't be registered even if the API key is set in your `.env` file.
-### 5. Register Provider in server.py
+### 4. Register Provider in server.py
The `configure_providers()` function in `server.py` handles provider registration. You need to:
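Although the diff elides the steps, the general registration pattern can be sketched as follows. This is illustrative only — the registry API, class names, and `EXAMPLE_API_KEY` are assumptions, not zen-mcp-server's actual code:

```python
import os

# Hypothetical sketch -- the registry API and provider class are
# illustrative assumptions, not the project's actual implementation.
class ModelProviderRegistry:
    _providers = {}

    @classmethod
    def register_provider(cls, name, provider):
        cls._providers[name] = provider

class ExampleProvider:
    def __init__(self, api_key):
        self.api_key = api_key

def configure_providers():
    """Register each provider whose API key is present in the environment."""
    registered = []
    example_key = os.getenv("EXAMPLE_API_KEY")
    if example_key:
        ModelProviderRegistry.register_provider("example", ExampleProvider(example_key))
        registered.append("example")
    if not registered:
        raise RuntimeError("At least one provider API key is required")
    return registered
```

The key idea is conditional registration: a provider is only registered when its key is set, so missing keys degrade gracefully rather than failing at startup.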
@@ -672,7 +647,7 @@ if __name__ == "__main__":
```
The simulator test is crucial because it:
-- Validates your provider works in the actual Docker environment
+- Validates your provider works in the actual server environment
- Tests real API integration, not just mocked behavior
- Verifies model name resolution works correctly
- Checks conversation continuity across requests
@@ -799,7 +774,7 @@ Before submitting your PR:
- [ ] Provider implementation complete with all required methods
- [ ] API key mapping added to `_get_api_key_for_provider()` in `providers/registry.py`
- [ ] Provider added to `PROVIDER_PRIORITY_ORDER` in `registry.py` (if native provider)
-- [ ] **Environment variables added to `docker-compose.yml`** (API key and restrictions)
+- [ ] **Environment variables added to `.env` file** (API key and restrictions)
- [ ] Provider imported and registered in `server.py`'s `configure_providers()`
- [ ] API key checking added to `configure_providers()` function
- [ ] Error message updated to include new provider
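For reference, the corresponding `.env` entries might look like the following sketch — the `EXAMPLE_*` variable names are placeholders mirroring the hypothetical provider used throughout this guide:

```env
# Placeholder names -- substitute your provider's actual prefix
EXAMPLE_API_KEY=your-example-api-key
# Optional: restrict which models the provider may use
EXAMPLE_ALLOWED_MODELS=model-small,model-large
```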

---

@@ -239,9 +239,9 @@ All tools that work with files support **both individual files and entire direct
**The Zen MCP Server's most revolutionary feature** is its ability to maintain conversation context even after Claude's memory resets. This enables truly persistent AI collaboration across multiple sessions and context boundaries.
-### 🔥 **The Breakthrough**
+### **The Breakthrough**
-Even when Claude's context resets or compacts, conversations can continue seamlessly because other models (O3, Gemini) have access to the complete conversation history stored in Redis and can "remind" Claude of everything that was discussed.
+Even when Claude's context resets or compacts, conversations can continue seamlessly because other models (O3, Gemini) have access to the complete conversation history stored in memory and can "remind" Claude of everything that was discussed.
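The in-memory threading behind this can be pictured with a small sketch (illustrative only — not the server's actual storage code), where each thread keeps its turns plus a last-updated timestamp used for expiry:

```python
import time

# Minimal sketch of in-memory conversation threading with expiry --
# illustrative only, not the server's real implementation.
class ConversationMemory:
    def __init__(self, timeout_hours=3):
        self._threads = {}
        self._timeout = timeout_hours * 3600

    def add_turn(self, thread_id, role, content):
        thread = self._threads.setdefault(thread_id, {"turns": [], "updated": 0.0})
        thread["turns"].append({"role": role, "content": content})
        thread["updated"] = time.time()

    def get_history(self, thread_id):
        thread = self._threads.get(thread_id)
        if thread is None or time.time() - thread["updated"] > self._timeout:
            # Expired or unknown threads behave as if no history exists
            self._threads.pop(thread_id, None)
            return []
        return thread["turns"]

memory = ConversationMemory(timeout_hours=3)
memory.add_turn("thread-1", "user", "Analyze this module")
memory.add_turn("thread-1", "assistant", "Here is my analysis...")
```

Any model that receives the thread ID can replay the full history, which is what lets a second model "remind" Claude after a reset.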
### Key Benefits

---

@@ -12,7 +12,7 @@ This server enables **true AI collaboration** between Claude and multiple AI mod
- **Cross-tool continuation** - Start with one tool (e.g., `analyze`) and continue with another (e.g., `codereview`) using the same conversation thread
- **Both AIs coordinate their approaches** - questioning assumptions, validating solutions, and building on each other's insights
- Each conversation maintains full context while only sending incremental updates
-- Conversations are automatically managed with Redis for persistence
+- Conversations are automatically managed in memory for the session duration
## Example: Multi-Model AI Coordination
@@ -52,7 +52,7 @@ This server enables **true AI collaboration** between Claude and multiple AI mod
**Conversation Management:**
- Up to 10 exchanges per conversation (configurable via `MAX_CONVERSATION_TURNS`)
- 3-hour expiry (configurable via `CONVERSATION_TIMEOUT_HOURS`)
-- Thread-safe with Redis persistence across all tools
+- Thread-safe with in-memory persistence across all tools
- **Image context preservation** - Images and visual references are maintained across conversation turns and tool switches
## Cross-Tool & Cross-Model Continuation Example

---

@@ -19,11 +19,6 @@ OPENAI_API_KEY=your-openai-key
-**Workspace Root:**
-```env
-# Required: Workspace root directory for file access
-WORKSPACE_ROOT=/Users/your-username
-```
-- Path that contains all files Claude might reference
-- Defaults to `$HOME` for direct usage, auto-configured for Docker
### API Keys (At least one required)
@@ -55,15 +50,14 @@ OPENROUTER_API_KEY=your_openrouter_api_key_here
**Option 3: Custom API Endpoints (Local models)**
```env
# For Ollama, vLLM, LM Studio, etc.
-# IMPORTANT: Use host.docker.internal, NOT localhost (Docker requirement)
-CUSTOM_API_URL=http://host.docker.internal:11434/v1 # Ollama example
+CUSTOM_API_URL=http://localhost:11434/v1 # Ollama example
CUSTOM_API_KEY= # Empty for Ollama
CUSTOM_MODEL_NAME=llama3.2 # Default model
```
-**Docker Network Requirements:**
-- ❌ WRONG: `http://localhost:11434/v1` (Docker containers cannot reach localhost)
-- ✅ CORRECT: `http://host.docker.internal:11434/v1` (Docker can reach host services)
+**Local Model Connection:**
+- Use standard localhost URLs since the server runs natively
+- Example: `http://localhost:11434/v1` for Ollama
### Model Configuration
@@ -165,16 +159,12 @@ XAI_ALLOWED_MODELS=grok,grok-3-fast
CUSTOM_MODELS_CONFIG_PATH=/path/to/your/custom_models.json
```
-**Redis Configuration:**
-```env
-# Redis URL for conversation threading (auto-configured for Docker)
-REDIS_URL=redis://redis:6379/0
-```
**Conversation Settings:**
```env
-# How long AI-to-AI conversation threads persist (hours)
-CONVERSATION_TIMEOUT_HOURS=3
+# How long AI-to-AI conversation threads persist in memory (hours)
+# Conversations are auto-purged when Claude closes its MCP connection or
+# when a session is quit / re-launched
+CONVERSATION_TIMEOUT_HOURS=5
# Maximum conversation turns (each exchange = 2 turns)
MAX_CONVERSATION_TURNS=20
@@ -215,7 +205,7 @@ CONVERSATION_TIMEOUT_HOURS=3
```env
# Local models only
DEFAULT_MODEL=llama3.2
-CUSTOM_API_URL=http://host.docker.internal:11434/v1
+CUSTOM_API_URL=http://localhost:11434/v1
CUSTOM_API_KEY=
CUSTOM_MODEL_NAME=llama3.2
LOG_LEVEL=DEBUG
@@ -232,9 +222,9 @@ LOG_LEVEL=INFO
## Important Notes
-**Docker Networking:**
-- Always use `host.docker.internal` instead of `localhost` for custom APIs
-- The server runs in Docker and cannot access `localhost` directly
+**Local Networking:**
+- Use standard localhost URLs for local models
+- The server runs as a native Python process
**API Key Priority:**
- Native APIs take priority over OpenRouter when both are configured

---

@@ -8,9 +8,7 @@ Thank you for your interest in contributing to Zen MCP Server! This guide will h
2. **Clone your fork** locally
3. **Set up the development environment**:
```bash
-python -m venv venv
-source venv/bin/activate  # On Windows: venv\Scripts\activate
-pip install -r requirements.txt
+./run-server.sh
```
4. **Create a feature branch** from `main`:
```bash
@@ -28,9 +26,6 @@ We maintain high code quality standards. **All contributions must pass our autom
Before submitting any PR, run our automated quality check script:
```bash
-# Activate virtual environment first
-source venv/bin/activate
# Run the comprehensive quality checks script
./code_quality_checks.sh
```
@@ -78,7 +73,7 @@ python communication_simulator_test.py
2. **Tool changes require simulator tests**:
- Add simulator tests in `simulator_tests/` for new or modified tools
- Use realistic prompts that demonstrate the feature
-- Validate output through Docker logs
+- Validate output through server logs
3. **Bug fixes require regression tests**:
- Add a test that would have caught the bug
@@ -94,7 +89,7 @@ python communication_simulator_test.py
Your PR title MUST follow one of these formats:
-**Version Bumping Prefixes** (trigger Docker build + version bump):
+**Version Bumping Prefixes** (trigger version bump):
- `feat: <description>` - New features (MINOR version bump)
- `fix: <description>` - Bug fixes (PATCH version bump)
- `breaking: <description>` or `BREAKING CHANGE: <description>` - Breaking changes (MAJOR version bump)
@@ -108,10 +103,9 @@ Your PR title MUST follow one of these formats:
- `ci: <description>` - CI/CD changes
- `style: <description>` - Code style changes
-**Docker Build Options**:
-- `docker: <description>` - Force Docker build without version bump
-- `docs+docker: <description>` - Documentation + Docker build
-- `chore+docker: <description>` - Maintenance + Docker build
+**Other Options**:
- `docs: <description>` - Documentation changes only
- `chore: <description>` - Maintenance tasks
#### PR Checklist
@@ -216,7 +210,7 @@ isort .
### Test Failures
- Check test output for specific errors
- Run individual tests for debugging: `pytest tests/test_specific.py -xvs`
-- Ensure Docker is running for simulator tests
+- Ensure server environment is set up for simulator tests
### Import Errors
- Verify virtual environment is activated

---

@@ -80,7 +80,7 @@ OPENROUTER_API_KEY=your-openrouter-api-key
> **Note:** Control which models can be used directly in your OpenRouter dashboard at [openrouter.ai](https://openrouter.ai/).
> This gives you centralized control over model access and spending limits.
-That's it! Docker Compose already includes all necessary configuration.
+That's it! The setup script handles all necessary configuration automatically.
### Option 2: Custom API Setup (Ollama, vLLM, etc.)
@@ -102,49 +102,46 @@ python -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-2-7b-chat-
#### 2. Configure Environment Variables
```bash
# Add to your .env file
-CUSTOM_API_URL=http://host.docker.internal:11434/v1 # Ollama example
+CUSTOM_API_URL=http://localhost:11434/v1 # Ollama example
CUSTOM_API_KEY= # Empty for Ollama (no auth needed)
CUSTOM_MODEL_NAME=llama3.2 # Default model to use
```
-**Important: Docker URL Configuration**
+**Local Model Connection**
-Since the Zen MCP server always runs in Docker, you must use `host.docker.internal` instead of `localhost` to connect to local models running on your host machine:
+The Zen MCP server runs natively, so you can use standard localhost URLs to connect to local models:
```bash
-# For Ollama, vLLM, LM Studio, etc. running on your host machine
-CUSTOM_API_URL=http://host.docker.internal:11434/v1 # Ollama default port (NOT localhost!)
+# For Ollama, vLLM, LM Studio, etc. running on your machine
+CUSTOM_API_URL=http://localhost:11434/v1 # Ollama default port
```
-**Never use:** `http://localhost:11434/v1` - Docker containers cannot reach localhost
-**Always use:** `http://host.docker.internal:11434/v1` - This allows Docker to access host services
#### 3. Examples for Different Platforms
**Ollama:**
```bash
-CUSTOM_API_URL=http://host.docker.internal:11434/v1
+CUSTOM_API_URL=http://localhost:11434/v1
CUSTOM_API_KEY=
CUSTOM_MODEL_NAME=llama3.2
```
**vLLM:**
```bash
-CUSTOM_API_URL=http://host.docker.internal:8000/v1
+CUSTOM_API_URL=http://localhost:8000/v1
CUSTOM_API_KEY=
CUSTOM_MODEL_NAME=meta-llama/Llama-2-7b-chat-hf
```
**LM Studio:**
```bash
-CUSTOM_API_URL=http://host.docker.internal:1234/v1
+CUSTOM_API_URL=http://localhost:1234/v1
CUSTOM_API_KEY=lm-studio # Or any value, LM Studio often requires some key
CUSTOM_MODEL_NAME=local-model
```
**text-generation-webui (with OpenAI extension):**
```bash
-CUSTOM_API_URL=http://host.docker.internal:5001/v1
+CUSTOM_API_URL=http://localhost:5001/v1
CUSTOM_API_KEY=
CUSTOM_MODEL_NAME=your-loaded-model
```

---

@@ -11,49 +11,59 @@ The easiest way to monitor logs is to use the `-f` flag when starting the server
This will start the server and immediately begin tailing the MCP server logs.
-## Viewing Logs in Docker
-To monitor MCP server activity in real-time:
-```bash
-# Follow MCP server logs (recommended)
-docker exec zen-mcp-server tail -f -n 500 /tmp/mcp_server.log
-# Or use the -f flag when starting the server
-./run-server.sh -f
-```
-**Note**: Due to MCP protocol limitations, container logs don't show tool execution details. Always use the commands above for debugging.
## Log Files
-Logs are stored in the container's `/tmp/` directory and rotate daily at midnight, keeping 7 days of history:
+Logs are stored in the `logs/` directory within your project folder:
-- **`mcp_server.log`** - Main server operations
-- **`mcp_activity.log`** - Tool calls and conversations
-- **`mcp_server_overflow.log`** - Overflow protection for large logs
+- **`mcp_server.log`** - Main server operations, API calls, and errors
+- **`mcp_activity.log`** - Tool calls and conversation tracking
-## Accessing Log Files
+Log files rotate automatically when they reach 20MB, keeping up to 10 rotated files.
-To access log files directly:
+## Viewing Logs
+To monitor MCP server activity:
```bash
-# Enter the container
-docker exec -it zen-mcp-server /bin/sh
+# Follow logs in real-time
+tail -f logs/mcp_server.log
-# View current logs
-cat /tmp/mcp_server.log
-cat /tmp/mcp_activity.log
+# View last 100 lines
+tail -n 100 logs/mcp_server.log
-# View previous days (with date suffix)
-cat /tmp/mcp_server.log.2024-06-14
+# View activity logs (tool calls only)
+tail -f logs/mcp_activity.log
+# Search for specific patterns
+grep "ERROR" logs/mcp_server.log
+grep "tool_name" logs/mcp_activity.log
```
## Log Level
-Set verbosity with `LOG_LEVEL` in your `.env` file or docker-compose.yml:
+Set verbosity with `LOG_LEVEL` in your `.env` file:
-```yaml
-environment:
-  - LOG_LEVEL=DEBUG  # Options: DEBUG, INFO, WARNING, ERROR
-```
+```env
+# Options: DEBUG, INFO, WARNING, ERROR
+LOG_LEVEL=INFO
+```
- **DEBUG**: Detailed information for debugging
- **INFO**: General operational messages (default)
- **WARNING**: Warning messages
- **ERROR**: Only error messages
## Log Format
Logs use a standardized format with timestamps:
```
2024-06-14 10:30:45,123 - module.name - INFO - Message here
```
## Tips
- Use `./run-server.sh -f` for the easiest log monitoring experience
- Activity logs show only tool-related events for cleaner output
- Main server logs include all operational details
- Logs persist across server restarts

---

@@ -5,9 +5,7 @@ This project includes comprehensive test coverage through unit tests and integra
## Running Tests
### Prerequisites
-- Python virtual environment activated: `source venv/bin/activate`
-- All dependencies installed: `pip install -r requirements.txt`
-- Docker containers running (for simulator tests): `./run-server.sh`
+- Environment set up: `./run-server.sh`
- Use `./run-server.sh -f` to automatically follow logs after starting
### Unit Tests
@@ -23,9 +21,9 @@ python -m pytest tests/test_providers.py -xvs
### Simulator Tests
-Simulator tests replicate real-world Claude CLI interactions with the MCP server running in Docker. Unlike unit tests that test isolated functions, simulator tests validate the complete end-to-end flow including:
+Simulator tests replicate real-world Claude CLI interactions with the standalone MCP server. Unlike unit tests that test isolated functions, simulator tests validate the complete end-to-end flow including:
- Actual MCP protocol communication
-- Docker container interactions
+- Standalone server interactions
- Multi-turn conversations across tools
- Log output validation
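Unit tests, by contrast, mock the provider layer entirely so no API calls are made. A hypothetical example of that style — the function and response shape are illustrative, not the project's actual tests:

```python
from unittest.mock import MagicMock

# Hypothetical mocked-provider test in the style of the unit suite;
# call_model and the response value are illustrative.
def call_model(provider, prompt):
    return provider.generate(prompt)

def test_call_model_uses_mock():
    provider = MagicMock()
    provider.generate.return_value = "mocked response"
    assert call_model(provider, "hello") == "mocked response"
    provider.generate.assert_called_once_with("hello")
```

Because the provider is a `MagicMock`, the test verifies call wiring and argument passing without incurring any API cost.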
@@ -33,7 +31,7 @@ Simulator tests replicate real-world Claude CLI interactions with the MCP server
#### Monitoring Logs During Tests
-**Important**: The MCP stdio protocol interferes with stderr output during tool execution. While server startup logs appear in `docker compose logs`, tool execution logs are only written to file-based logs inside the container. This is a known limitation of the stdio-based MCP protocol and cannot be fixed without changing the MCP implementation.
+**Important**: The MCP stdio protocol interferes with stderr output during tool execution. Tool execution logs are written to local log files. This is a known limitation of the stdio-based MCP protocol.
To monitor logs during test execution:
@@ -42,20 +40,20 @@ To monitor logs during test execution:
./run-server.sh -f
# Or manually monitor main server logs (includes all tool execution details)
-docker exec zen-mcp-server tail -f -n 500 /tmp/mcp_server.log
+tail -f -n 500 logs/mcp_server.log
# Monitor MCP activity logs (tool calls and completions)
-docker exec zen-mcp-server tail -f /tmp/mcp_activity.log
+tail -f logs/mcp_activity.log
# Check log file sizes (logs rotate at 20MB)
-docker exec zen-mcp-server ls -lh /tmp/mcp_*.log*
+ls -lh logs/mcp_*.log*
```
**Log Rotation**: All log files are configured with automatic rotation at 20MB to prevent disk space issues. The server keeps:
- 10 rotated files for mcp_server.log (200MB total)
- 5 rotated files for mcp_activity.log (100MB total)
-**Why logs don't appear in docker compose logs**: The MCP stdio_server captures stderr during tool execution to prevent interference with the JSON-RPC protocol communication. This means that while you'll see startup logs in `docker compose logs`, you won't see tool execution logs there.
+**Why logs appear in files**: The MCP stdio_server captures stderr during tool execution to prevent interference with the JSON-RPC protocol communication. This means tool execution logs are written to files rather than displayed in console output.
#### Running All Simulator Tests
```bash
@@ -65,7 +63,7 @@ python communication_simulator_test.py
# Run with verbose output for debugging
python communication_simulator_test.py --verbose
-# Keep Docker logs after tests for inspection
+# Keep server logs after tests for inspection
python communication_simulator_test.py --keep-logs
```
@@ -79,7 +77,7 @@ python communication_simulator_test.py --individual basic_conversation
# Examples of available tests:
python communication_simulator_test.py --individual content_validation
python communication_simulator_test.py --individual cross_tool_continuation
-python communication_simulator_test.py --individual redis_validation
+python communication_simulator_test.py --individual memory_validation
```
#### Other Options
@@ -90,8 +88,6 @@ python communication_simulator_test.py --list-tests
# Run multiple specific tests (not all)
python communication_simulator_test.py --tests basic_conversation content_validation
-# Force Docker environment rebuild before running tests
-python communication_simulator_test.py --rebuild
```
### Code Quality Checks
@@ -135,11 +131,8 @@ For detailed contribution guidelines, testing requirements, and code quality sta
### Quick Testing Reference
```bash
-# Activate virtual environment
-source venv/bin/activate
-# Run linting checks
-ruff check . && black --check . && isort --check-only .
+# Run quality checks
+./code_quality_checks.sh
# Run unit tests
python -m pytest -xvs

---

@@ -79,7 +79,7 @@ bug hunting and reduces the chance of wasting precious tokens back and forth.
**Runtime Environment Issues:**
```
-"Debug deployment issues with Docker container startup failures, here's the runtime info: [environment details]"
+"Debug deployment issues with server startup failures, here's the runtime info: [environment details]"
```
## Debugging Methodology

---

@@ -56,7 +56,7 @@ The tool displays:
🔹 Custom/Local - ✅ Configured
• local-llama (llama3.2) - 128K context, local inference
-  • Available at: http://host.docker.internal:11434/v1
+  • Available at: http://localhost:11434/v1
🔹 OpenRouter - ❌ Not configured
Set OPENROUTER_API_KEY to enable access to Claude, GPT-4, and more models

---

@@ -42,8 +42,8 @@ The tool provides:
**System Information:**
- Server uptime and status
- Memory and resource usage (if available)
-- Connection status with Redis (for conversation memory)
-- Docker container information
+- Conversation memory status
+- Server process information
## Example Output
@@ -58,7 +58,7 @@ The tool provides:
⚙️ Configuration:
• Default Model: auto
• Providers: Google ✅, OpenAI ✅, Custom ✅
-• Conversation Memory: Redis
+• Conversation Memory: Active
• Web Search: Enabled
🛠️ Available Tools (12):
@@ -77,8 +77,8 @@ The tool provides:
🔍 System Status:
• Server Uptime: 2h 35m
-• Redis Connection: Active
-• Docker Container: zen-mcp-server (running)
+• Memory Storage: Active
+• Server Process: Running
```
## When to Use Version Tool
@@ -106,7 +106,7 @@ The version tool can help diagnose common issues:
**Performance Troubleshooting:**
- Server uptime and stability
- Resource usage patterns
-- Redis connection health
+- Memory storage health
## Tool Parameters

---

@@ -24,7 +24,7 @@ claude.exe --debug
Look for error messages in the console output, especially:
- API key errors
-- Docker connection issues
+- Python/environment issues
- File permission errors
### 3. Verify API Keys
@@ -40,60 +40,72 @@ cat .env
# OPENAI_API_KEY=your-key-here
```
-If you need to update your API keys, edit the `.env` file and then run:
+If you need to update your API keys, edit the `.env` file and then restart Claude for changes to take effect.
+### 4. Check Server Logs
+View the server logs for detailed error information:
+```bash
+# View recent logs
+tail -n 100 logs/mcp_server.log
+# Follow logs in real-time
+tail -f logs/mcp_server.log
+# Or use the -f flag when starting to automatically follow logs
+./run-server.sh -f
+# Search for errors
+grep "ERROR" logs/mcp_server.log
+```
-```bash
-# Restart services
-./run-server.sh
-```
-This will validate your configuration and restart the services.
-### 4. Check Docker Logs
-View the container logs for detailed error information:
-```bash
-# Check if containers are running
-docker-compose ps
-# View MCP server logs (recommended - shows actual tool execution)
-docker exec zen-mcp-server tail -f -n 500 /tmp/mcp_server.log
-```
-**Note**: Due to MCP protocol limitations, `docker-compose logs` only shows startup logs, not tool execution logs. Always use the docker exec command above or the `-f` flag for debugging.
See [Logging Documentation](logging.md) for more details on accessing logs.
### 5. Common Issues
**"Connection failed" in Claude Desktop**
-- Ensure Docker is running: `docker ps`
-- Restart services: `docker-compose restart`
+- Ensure the server path is correct in your Claude config
+- Run `./run-server.sh` to verify setup and see configuration
+- Check that Python is installed: `python3 --version`
**"API key environment variable is required"**
- Add your API key to the `.env` file
-- Run: `./run-server.sh` to validate and restart
+- Restart Claude Desktop after updating `.env`
**File path errors**
- Always use absolute paths: `/Users/you/project/file.py`
- Never use relative paths: `./file.py`
-### 6. Still Having Issues?
+**Python module not found**
+- Run `./run-server.sh` to reinstall dependencies
+- Check virtual environment is activated: should see `.zen_venv` in the Python path
+### 6. Environment Issues
+**Virtual Environment Problems**
+```bash
+# Reset environment completely
+rm -rf .zen_venv
+./run-server.sh
+```
+**Permission Issues**
+```bash
+# Ensure script is executable
+chmod +x run-server.sh
+```
+### 7. Still Having Issues?
If the problem persists after trying these steps:
1. **Reproduce the issue** - Note the exact steps that cause the problem
-2. **Collect logs** - Save relevant error messages from Claude debug mode and Docker logs
+2. **Collect logs** - Save relevant error messages from Claude debug mode and server logs
3. **Open a GitHub issue** with:
- Your operating system
-   - Error messages
+   - Python version: `python3 --version`
+   - Error messages from logs
- Steps to reproduce
- What you've already tried