fix: critical conversation history bug and improve Docker integration

This commit addresses several critical issues and improvements:

🔧 Critical Fixes:
- Fixed conversation history not being included when using continuation_id in AI-to-AI conversations (see the sketch after this list)
- Fixed test mock targeting issues preventing proper conversation memory validation
- Fixed Docker debug logging functionality with Gemini tools
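
Conceptually (a minimal sketch, not the actual implementation — the helper passed in and the thread fields are assumed), the fix means the tool layer rebuilds stored history whenever a continuation_id is supplied instead of silently dropping it:

```python
# Sketch only: names and thread structure are illustrative, not the real code.
def build_prompt_with_history(prompt: str, continuation_id: str | None, get_thread) -> str:
    """Prepend stored conversation turns when continuing an existing thread."""
    if not continuation_id:
        return prompt
    thread = get_thread(continuation_id)  # previously stored turns, e.g. loaded from Redis
    if not thread:
        return prompt
    history = "\n\n".join(f"{turn['role']}: {turn['content']}" for turn in thread["turns"])
    # Before the fix, continued threads reached the model without this history block.
    return f"=== CONVERSATION HISTORY ===\n{history}\n\n=== NEW MESSAGE ===\n{prompt}"
```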

🐛 Bug Fixes:
- Fixed Docker Compose configuration so container commands execute correctly
- Retargeted test mock patch paths from utils.conversation_memory.* to tools.base.* so mocks intercept the names the tools actually import (see the sketch after this list)
- Bumped version to 3.1.0 to reflect these improvements
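
The mock retargeting follows the standard unittest.mock rule: patch a name where it is looked up, not where it is defined. A rough sketch, with get_thread standing in for whatever helper tools.base actually imports:

```python
# Illustrative only; get_thread is a stand-in for the real conversation-memory helper.
from unittest.mock import patch

# Before: patching the definition site, which the code under test never consults at call time:
#   patch("utils.conversation_memory.get_thread")
# After: patching the reference imported into tools.base, so the tool under test sees the mock.
with patch("tools.base.get_thread") as mock_get_thread:
    mock_get_thread.return_value = {"turns": []}
    # ... invoke the tool with a continuation_id and assert the history handling ...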

🚀 Improvements:
- Enhanced Docker environment configuration with comprehensive logging setup
- Added cross-tool continuation documentation and examples in README
- Improved error handling and validation across all tools
- Better logging configuration with LOG_LEVEL environment variable support
- Enhanced conversation memory system documentation

🧪 Testing:
- Added comprehensive conversation history bug fix tests
- Added cross-tool continuation functionality tests
- All 132 tests now pass with proper conversation history validation
- Improved test coverage for AI-to-AI conversation threading

Code Quality:
- Applied black, isort, and ruff formatting across entire codebase
- Enhanced inline documentation for conversation memory system
- Cleaned up temporary files and improved repository hygiene
- Better test descriptions and coverage for critical functionality

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

README.md

@@ -186,6 +186,7 @@ This server enables **true AI collaboration** between Claude and Gemini, where t
- **Claude can respond** with additional information, files, or refined instructions
- **Claude can work independently** between exchanges - implementing solutions, gathering data, or performing analysis
- **Claude can return to Gemini** with progress updates and new context for further collaboration
- **Cross-tool continuation** - Start with one tool (e.g., `analyze`) and continue with another (e.g., `codereview`) using the same conversation thread
- **Both AIs coordinate their approaches** - questioning assumptions, validating solutions, and building on each other's insights
- Each conversation maintains full context while only sending incremental updates
- Conversations are automatically managed with Redis for persistence
@@ -208,12 +209,27 @@ This server enables **true AI collaboration** between Claude and Gemini, where t
- **Coordinated problem-solving**: Each AI contributes their strengths to complex problems
- **Context building**: Claude gathers information while Gemini provides deep analysis
- **Approach validation**: AIs can verify and improve each other's solutions
- **Cross-tool continuation**: Seamlessly continue conversations across different tools while preserving all context
- **Asynchronous workflow**: Conversations don't need to be sequential - Claude can work on tasks between exchanges, then return to Gemini with additional context and progress updates
- **Incremental updates**: Share only new information in each exchange while maintaining full conversation history
- **Automatic 25K limit bypass**: Each exchange sends only incremental context, allowing unlimited total conversation size
- Up to 5 exchanges per conversation with 1-hour expiry
- Thread-safe with Redis persistence across all tools (see the sketch below)
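
A minimal sketch of how such Redis-backed threads could work, assuming a `thread:{continuation_id}` key holding JSON turns (the real schema and key names may differ):

```python
# Rough sketch of Redis-backed conversation threads; schema and key names are assumed.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

THREAD_TTL_SECONDS = 3600   # conversations expire after 1 hour
MAX_EXCHANGES = 5           # up to 5 exchanges per conversation


def add_turn(continuation_id: str, role: str, content: str) -> bool:
    key = f"thread:{continuation_id}"
    raw = r.get(key)
    thread = json.loads(raw) if raw else {"turns": []}
    if len(thread["turns"]) >= MAX_EXCHANGES * 2:  # assuming two turns (one per side) per exchange
        return False  # thread is full; the caller should start a new conversation
    thread["turns"].append({"role": role, "content": content})
    r.setex(key, THREAD_TTL_SECONDS, json.dumps(thread))  # write-back refreshes the 1-hour expiry
    return True
```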
**Cross-tool continuation example:**
```
1. Claude: "Use gemini to analyze /src/auth.py for security issues"
→ Gemini analyzes and finds vulnerabilities, provides continuation_id
2. Claude: "Use gemini to review the authentication logic thoroughly"
→ Uses same continuation_id, Gemini sees previous analysis and files
→ Provides detailed code review building on previous findings
3. Claude: "Use gemini to help debug the auth test failures"
→ Same continuation_id, full context from analysis + review
→ Gemini provides targeted debugging with complete understanding
```
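
Under the hood, each follow-up simply passes the `continuation_id` returned by the previous tool to the next one; an illustrative sketch of the tool arguments (apart from `continuation_id`, the field names here are assumptions):

```python
# Illustrative argument payloads; only continuation_id is taken from the description above.
first_call = {
    "tool": "analyze",
    "arguments": {"files": ["/src/auth.py"], "prompt": "Check for security issues"},
}
# The analyze tool replies with its findings plus a continuation_id, e.g. "abc123".

follow_up = {
    "tool": "codereview",
    "arguments": {
        "files": ["/src/auth.py"],
        "prompt": "Review the authentication logic thoroughly",
        "continuation_id": "abc123",  # same thread, so Gemini sees the earlier analysis
    },
}
```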
## Available Tools
**Quick Tool Selection Guide:**
@@ -837,6 +853,28 @@ Different tools use optimized temperature settings:
- **`TEMPERATURE_BALANCED`**: `0.5` - Used for general chat (balanced creativity/accuracy)
- **`TEMPERATURE_CREATIVE`**: `0.7` - Used for deep thinking and architecture (more creative)
### Logging Configuration
Control logging verbosity via the `LOG_LEVEL` environment variable:
- **`DEBUG`**: Shows detailed operational messages, tool execution flow, conversation threading
- **`INFO`**: Shows general operational messages (default)
- **`WARNING`**: Shows only warnings and errors
- **`ERROR`**: Shows only errors

**Set in your .env file:**
```bash
LOG_LEVEL=DEBUG # For troubleshooting
LOG_LEVEL=INFO # For normal operation (default)
```
**For Docker:**
```bash
# In .env file
LOG_LEVEL=DEBUG
# Or set directly when starting
LOG_LEVEL=DEBUG docker compose up
```
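
Internally, honoring `LOG_LEVEL` amounts to something like the following sketch (the server's actual logging setup may differ):

```python
# Sketch of applying LOG_LEVEL at startup; not the server's real configuration code.
import logging
import os

level_name = os.getenv("LOG_LEVEL", "INFO").upper()
logging.basicConfig(
    level=getattr(logging, level_name, logging.INFO),  # fall back to INFO on unknown values
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
)
logging.getLogger(__name__).debug("Debug logging enabled")  # emitted only with LOG_LEVEL=DEBUG
```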
## File Path Requirements