Fix conversation history duplication and optimize file embedding

This refactoring fixes a critical conversation history duplication bug and
significantly improves token efficiency through smarter file embedding:

**Key Improvements:**
• Fixed conversation history duplication bug by centralizing reconstruction in server.py
• Added intelligent file filtering so files already present in conversation history are not re-embedded (see the sketch after this list)
• Centralized file processing logic in BaseTool._prepare_file_content_for_prompt()
• Enhanced log monitoring with better categorization and file embedding visibility
• Updated comprehensive test suite to verify new architecture and edge cases
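
To illustrate the filtering idea, here is a minimal, self-contained sketch of deduplicating files against earlier conversation turns. The `Turn` dataclass and `filter_new_files` name are illustrative assumptions, not code taken from the repository:

```python
from dataclasses import dataclass, field


@dataclass
class Turn:
    """Hypothetical stand-in for a stored conversation turn."""
    role: str
    content: str
    files: list[str] = field(default_factory=list)


def filter_new_files(requested: list[str], history: list[Turn]) -> list[str]:
    """Keep only files that no earlier turn has already embedded."""
    already_embedded = {f for turn in history for f in turn.files}
    return [f for f in requested if f not in already_embedded]


# main.py was embedded in an earlier turn, so only utils.py would be re-read.
history = [Turn("user", "debug the crash in main.py", files=["main.py"])]
print(filter_new_files(["main.py", "utils.py"], history))  # ['utils.py']
```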

**Architecture Changes:**
• Removed duplicate conversation history reconstruction from tools/base.py
• Conversation history is now handled exclusively by server.py:reconstruct_thread_context (sketched after this list)
• All tools now use centralized file processing with automatic deduplication
• Improved token efficiency by embedding each unique file only once per conversation
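
As a rough illustration of the single reconstruction point, the sketch below (reusing the hypothetical `Turn` from the earlier example) shows how a function like `reconstruct_thread_context` could prepend history once and report which files are already embedded; it is not the actual server.py code:

```python
def reconstruct_thread_context(prompt: str, history: list[Turn]) -> tuple[str, set[str]]:
    """Prepend prior turns to the prompt once and report files already embedded."""
    history_block = "\n".join(f"{turn.role}: {turn.content}" for turn in history)
    already_embedded = {f for turn in history for f in turn.files}
    rebuilt = f"=== CONVERSATION HISTORY ===\n{history_block}\n=== END HISTORY ===\n\n{prompt}"
    return rebuilt, already_embedded
```

Because reconstruction happens in exactly one place, tools no longer rebuild history themselves, which is what eliminated the duplication.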

**Performance Benefits:**
• Reduced token usage through smart file filtering
• Eliminated redundant file embeddings in continued conversations
• Better observability with detailed debug logging for file operations

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Fahad
Date: 2025-06-11 11:40:12 +04:00
Parent: 4466d1d1fe
Commit: 5a94737516
11 changed files with 501 additions and 84 deletions


@@ -9,7 +9,6 @@ from pydantic import Field
 from config import TEMPERATURE_ANALYTICAL
 from prompts import DEBUG_ISSUE_PROMPT
-from utils import read_files
 from .base import BaseTool, ToolRequest
 from .models import ToolOutput
@@ -159,8 +158,12 @@ class DebugIssueTool(BaseTool):
         # Add relevant files if provided
         if request.files:
-            file_content = read_files(request.files)
-            context_parts.append(f"\n=== RELEVANT CODE ===\n{file_content}\n=== END CODE ===")
+            # Use centralized file processing logic
+            continuation_id = getattr(request, "continuation_id", None)
+            file_content = self._prepare_file_content_for_prompt(request.files, continuation_id, "Code")
+            if file_content:
+                context_parts.append(f"\n=== RELEVANT CODE ===\n{file_content}\n=== END CODE ===")
         full_context = "\n".join(context_parts)
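
For context on the call site above, a plausible shape for the centralized helper is sketched below; the real `_prepare_file_content_for_prompt` in tools/base.py may differ, and `_get_already_embedded_files` is a hypothetical lookup into conversation memory:

```python
import logging

from utils import read_files  # existing utility shown in the original import block

logger = logging.getLogger(__name__)


class BaseTool:
    def _prepare_file_content_for_prompt(self, files, continuation_id, label):
        """Read only files that earlier turns of this conversation have not embedded."""
        already_embedded = (
            self._get_already_embedded_files(continuation_id)  # hypothetical helper
            if continuation_id
            else set()
        )
        new_files = [f for f in files if f not in already_embedded]
        if not new_files:
            logger.debug("%s: all %d file(s) already embedded, skipping", label, len(files))
            return ""
        logger.debug("%s: embedding %d of %d file(s)", label, len(new_files), len(files))
        return read_files(new_files)
```

Returning an empty string when every requested file is already in history is what lets the tool-side `if file_content:` check skip the redundant CODE block entirely.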