Migration from Docker to Standalone Python Server (#73)

* Migration from Docker to standalone server

- Migration handling
- Fixed tests
- Use simpler in-memory storage
- Support for concurrent logging to disk
- Simplified direct connections to localhost

* Migration from Docker / Redis to standalone script

- Updated tests
- Updated run script
- Fixed requirements
- Use dotenv
- Ask once whether the user would like to install MCP in Claude Desktop
- Updated docs

* More cleanup and references to docker removed

* Cleanup

* Comments

* Fixed tests

* Fix GitHub Actions workflow for standalone Python architecture

- Install requirements-dev.txt for pytest and testing dependencies
- Remove Docker setup from simulation tests (now standalone)
- Simplify linting job to use requirements-dev.txt
- Update simulation tests to run directly without Docker

Fixes unit test failures in CI due to missing pytest dependency.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Remove simulation tests from GitHub Actions

- Removed simulation-tests job that makes real API calls
- Keep only unit tests (mocked, no API costs) and linting
- Simulation tests should be run manually with real API keys
- Reduces CI costs and complexity

GitHub Actions now only runs:
- Unit tests (569 tests, all mocked)
- Code quality checks (ruff, black)


* Fixed tests

* Fixed tests

---------

Co-authored-by: Claude <noreply@anthropic.com>
Author: Beehive Innovations
Date: 2025-06-18 23:41:22 +04:00
Committed by: GitHub
Parent commit: 9d72545ecd
Commit: 4151c3c3a5
121 changed files with 2842 additions and 3168 deletions


@@ -19,11 +19,6 @@ OPENAI_API_KEY=your-openai-key
**Workspace Root:**
```env
# Required: Workspace root directory for file access
WORKSPACE_ROOT=/Users/your-username
```
-- Path that contains all files Claude might reference
-- Defaults to `$HOME` for direct usage, auto-configured for Docker
### API Keys (At least one required)
@@ -55,15 +50,14 @@ OPENROUTER_API_KEY=your_openrouter_api_key_here
**Option 3: Custom API Endpoints (Local models)**
```env
# For Ollama, vLLM, LM Studio, etc.
-# IMPORTANT: Use host.docker.internal, NOT localhost (Docker requirement)
-CUSTOM_API_URL=http://host.docker.internal:11434/v1 # Ollama example
+CUSTOM_API_URL=http://localhost:11434/v1 # Ollama example
CUSTOM_API_KEY= # Empty for Ollama
CUSTOM_MODEL_NAME=llama3.2 # Default model
```
-**Docker Network Requirements:**
-- ❌ WRONG: `http://localhost:11434/v1` (Docker containers cannot reach localhost)
-- ✅ CORRECT: `http://host.docker.internal:11434/v1` (Docker can reach host services)
+**Local Model Connection:**
+- Use standard localhost URLs since the server runs natively
+- Example: `http://localhost:11434/v1` for Ollama
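Since the server now runs natively, resolving the local-model endpoint reduces to reading `CUSTOM_API_URL` with a plain-localhost fallback. A minimal sketch, assuming a hypothetical helper name (`resolve_custom_api_url` is not part of the server's actual API; the Ollama default is taken from the docs above):

```python
import os

# Hypothetical helper: resolve the local-model endpoint.
# With the standalone server there is no Docker network boundary,
# so a plain localhost URL is all that is needed.
def resolve_custom_api_url(default: str = "http://localhost:11434/v1") -> str:
    """Return CUSTOM_API_URL from the environment, or the Ollama default."""
    return os.environ.get("CUSTOM_API_URL") or default

os.environ.pop("CUSTOM_API_URL", None)   # nothing configured
print(resolve_custom_api_url())          # falls back to the Ollama default
```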
### Model Configuration
@@ -165,16 +159,12 @@ XAI_ALLOWED_MODELS=grok,grok-3-fast
CUSTOM_MODELS_CONFIG_PATH=/path/to/your/custom_models.json
```
-**Redis Configuration:**
-```env
-# Redis URL for conversation threading (auto-configured for Docker)
-REDIS_URL=redis://redis:6379/0
-```
**Conversation Settings:**
```env
-# How long AI-to-AI conversation threads persist (hours)
-CONVERSATION_TIMEOUT_HOURS=3
+# How long AI-to-AI conversation threads persist in memory (hours)
+# Conversations are auto-purged when claude closes its MCP connection or
+# when a session is quit / re-launched
+CONVERSATION_TIMEOUT_HOURS=5
# Maximum conversation turns (each exchange = 2 turns)
MAX_CONVERSATION_TURNS=20
@@ -215,7 +205,7 @@ CONVERSATION_TIMEOUT_HOURS=3
```env
# Local models only
DEFAULT_MODEL=llama3.2
-CUSTOM_API_URL=http://host.docker.internal:11434/v1
+CUSTOM_API_URL=http://localhost:11434/v1
CUSTOM_API_KEY=
CUSTOM_MODEL_NAME=llama3.2
LOG_LEVEL=DEBUG
@@ -232,9 +222,9 @@ LOG_LEVEL=INFO
## Important Notes
-**Docker Networking:**
-- Always use `host.docker.internal` instead of `localhost` for custom APIs
-- The server runs in Docker and cannot access `localhost` directly
+**Local Networking:**
+- Use standard localhost URLs for local models
+- The server runs as a native Python process
**API Key Priority:**
- Native APIs take priority over OpenRouter when both are configured
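The priority rule above can be sketched as follows. The function name and the exact set of native key names are assumptions for illustration, not the server's actual resolution code:

```python
# Illustrative sketch: native provider keys win over OpenRouter,
# and a custom endpoint is used only when neither is configured.
def pick_provider(env: dict) -> str:
    native_keys = ("GEMINI_API_KEY", "OPENAI_API_KEY", "XAI_API_KEY")
    if any(env.get(k) for k in native_keys):
        return "native"
    if env.get("OPENROUTER_API_KEY"):
        return "openrouter"
    if env.get("CUSTOM_API_URL"):
        return "custom"
    raise RuntimeError("no provider configured")

# Both configured: the native API is chosen.
print(pick_provider({"OPENAI_API_KEY": "sk-...",
                     "OPENROUTER_API_KEY": "or-..."}))  # → native
```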