Migration from Docker to Standalone Python Server (#73)

* Migration from Docker to standalone server
  - Added migration handling
  - Fixed tests
  - Use simpler in-memory storage
  - Support concurrent logging to disk (a sketch of both follows this list)
  - Simplified to direct connections to localhost
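
As a rough illustration of the storage change, here is a minimal sketch (all names hypothetical, not the project's actual API) of an in-memory conversation store whose disk logging is serialized with a lock:

```python
import json
import threading

# Hypothetical sketch: a process-local dict replaces the Redis store, and a
# lock serializes appends so concurrent requests cannot interleave log lines.
_conversations: dict[str, list[dict]] = {}
_log_lock = threading.Lock()

def record_turn(thread_id: str, turn: dict, log_path: str = "server.log") -> None:
    # In-memory storage lives for the lifetime of the standalone process.
    _conversations.setdefault(thread_id, []).append(turn)
    # Lock-guarded append keeps the shared log file consistent under concurrency.
    with _log_lock:
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps({"thread": thread_id, "turn": turn}) + "\n")
```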

* Migration from Docker/Redis to standalone script
  - Updated tests
  - Updated run script
  - Fixed requirements
  - Use dotenv for configuration (see the sketch after this list)
  - Ask once whether the user would like to install the MCP server in Claude Desktop
  - Updated docs
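
A minimal sketch of the dotenv-based startup, assuming the standard python-dotenv package (the environment variable name shown is an assumption, not necessarily the project's):

```python
import os

from dotenv import load_dotenv  # python-dotenv

# Reads key=value pairs from a .env file in the working directory, if present,
# so the standalone server needs no container-level environment wiring.
load_dotenv()

api_key = os.getenv("GEMINI_API_KEY")  # hypothetical variable name
if not api_key:
    raise SystemExit("GEMINI_API_KEY is not set; add it to .env")
```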

* More cleanup; removed remaining references to Docker

* Cleanup

* Comments

* Fixed tests

* Fix GitHub Actions workflow for standalone Python architecture

- Install requirements-dev.txt for pytest and testing dependencies
- Remove Docker setup from simulation tests (now standalone)
- Simplify linting job to use requirements-dev.txt
- Update simulation tests to run directly without Docker

Fixes unit test failures in CI due to missing pytest dependency.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Remove simulation tests from GitHub Actions

- Removed the simulation-tests job, which makes real API calls
- Kept only unit tests (mocked, no API costs) and linting
- Simulation tests should be run manually with real API keys
- Reduces CI cost and complexity

GitHub Actions now only runs:
- Unit tests (569 tests, all mocked; a sketch of the style follows)
- Code quality checks (ruff, black)
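
For flavor, a minimal sketch of the mocked unit-test style (the test and the provider stub are hypothetical, not taken from the suite):

```python
from unittest.mock import MagicMock

def test_tool_uses_provider_without_network_calls():
    # Stand-in provider: generate_content never reaches a real API,
    # so the test is free to run in CI.
    provider = MagicMock()
    provider.generate_content.return_value = "mocked response"

    reply = provider.generate_content("hello")

    assert reply == "mocked response"
    provider.generate_content.assert_called_once_with("hello")
```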

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fixed tests

* Fixed tests

---------

Co-authored-by: Claude <noreply@anthropic.com>
Committed by Beehive Innovations on 2025-06-18 23:41:22 +04:00 via GitHub
Parent: 9d72545ecd · Commit: 4151c3c3a5
121 changed files with 2842 additions and 3168 deletions


@@ -37,7 +37,7 @@ from utils.conversation_memory import (
     get_conversation_file_list,
     get_thread,
 )
-from utils.file_utils import read_file_content, read_files, translate_path_for_environment
+from utils.file_utils import read_file_content, read_files
 from .models import SPECIAL_STATUS_MODELS, ContinuationOffer, ToolOutput
@@ -1229,15 +1229,13 @@ When recommending searches, be specific about what information you need and why
         updated_files = []
         for file_path in files:
-            # Translate path for current environment (Docker/direct)
-            translated_path = translate_path_for_environment(file_path)
             # Check if the filename is exactly "prompt.txt"
             # This ensures we don't match files like "myprompt.txt" or "prompt.txt.bak"
-            if os.path.basename(translated_path) == "prompt.txt":
+            if os.path.basename(file_path) == "prompt.txt":
                 try:
                     # Read prompt.txt content and extract just the text
-                    content, _ = read_file_content(translated_path)
+                    content, _ = read_file_content(file_path)
                     # Extract the content between the file markers
                     if "--- BEGIN FILE:" in content and "--- END FILE:" in content:
                         lines = content.split("\n")
@@ -1568,6 +1566,17 @@ When recommending searches, be specific about what information you need and why
                     parsed_status = status_model.model_validate(potential_json)
                     logger.debug(f"{self.name} tool detected special status: {status_key}")
+                    # Enhance mandatory_instructions for files_required_to_continue
+                    if status_key == "files_required_to_continue" and hasattr(
+                        parsed_status, "mandatory_instructions"
+                    ):
+                        original_instructions = parsed_status.mandatory_instructions
+                        enhanced_instructions = self._enhance_mandatory_instructions(original_instructions)
+                        # Create a new model instance with enhanced instructions
+                        enhanced_data = parsed_status.model_dump()
+                        enhanced_data["mandatory_instructions"] = enhanced_instructions
+                        parsed_status = status_model.model_validate(enhanced_data)
+
                     # Extract model information for metadata
                     metadata = {
                         "original_request": (
@@ -1936,7 +1945,7 @@ When recommending searches, be specific about what information you need and why
         elif "gpt" in model_name.lower() or "o3" in model_name.lower():
             # Register OpenAI provider if not already registered
             from providers.base import ProviderType
-            from providers.openai import OpenAIModelProvider
+            from providers.openai_provider import OpenAIModelProvider

             ModelProviderRegistry.register_provider(ProviderType.OPENAI, OpenAIModelProvider)
             provider = ModelProviderRegistry.get_provider(ProviderType.OPENAI)
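
For context, the register-then-get pattern this hunk relies on looks roughly like the following (a simplified sketch; the real ModelProviderRegistry in this codebase has more responsibilities, such as API-key checks):

```python
# Simplified sketch of lazy provider registration. Providers are registered
# as classes and only instantiated the first time they are requested.
class ModelProviderRegistry:
    _providers: dict = {}
    _instances: dict = {}

    @classmethod
    def register_provider(cls, provider_type, provider_class) -> None:
        cls._providers[provider_type] = provider_class

    @classmethod
    def get_provider(cls, provider_type):
        if provider_type not in cls._instances:
            cls._instances[provider_type] = cls._providers[provider_type]()
        return cls._instances[provider_type]
```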
@@ -1948,3 +1957,28 @@ When recommending searches, be specific about what information you need and why
             )
         return provider
+
+    def _enhance_mandatory_instructions(self, original_instructions: str) -> str:
+        """
+        Enhance mandatory instructions for files_required_to_continue responses.
+
+        This adds generic guidance to help Claude understand the importance
+        of providing the requested files and context.
+
+        Args:
+            original_instructions: The original instructions from the model
+
+        Returns:
+            str: Enhanced instructions with additional guidance
+        """
+        generic_guidance = (
+            "\n\nIMPORTANT GUIDANCE:\n"
+            "• The requested files are CRITICAL for providing accurate analysis\n"
+            "• Please include ALL files mentioned in the files_needed list\n"
+            "• Use FULL absolute paths to real files/folders - DO NOT SHORTEN paths - and confirm that these exist\n"
+            "• If you cannot locate specific files or the files are extremely large, think hard, study the code and provide similar/related files that might contain the needed information\n"
+            "• After providing the files, use the same tool again with the continuation_id to continue the analysis\n"
+            "• The tool cannot proceed to perform its function accurately without this additional context"
+        )
+
+        return f"{original_instructions}{generic_guidance}"
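
The enhancement path above uses Pydantic v2's dump/validate round-trip rather than mutating the parsed model in place. An illustrative, self-contained sketch of that pattern (the status model here is a hypothetical stand-in, not the project's real class):

```python
from pydantic import BaseModel

class FilesNeededStatus(BaseModel):
    # Hypothetical stand-in for the real files_required_to_continue model.
    status: str
    mandatory_instructions: str

original = FilesNeededStatus(
    status="files_required_to_continue",
    mandatory_instructions="Please provide the referenced config file.",
)

# Copy the data, edit one field, and re-validate into a fresh instance,
# mirroring the model_dump() / model_validate() pair in the diff above.
data = original.model_dump()
data["mandatory_instructions"] += "\n\nIMPORTANT GUIDANCE: ..."
enhanced = FilesNeededStatus.model_validate(data)

print(enhanced.mandatory_instructions)
```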