Add Consensus Tool for Multi-Model Perspective Gathering (#67)

* WIP
Refactor model-name resolution so it happens once at the MCP call boundary
Pass the model context around instead
The consensus tool gathers a consensus from multiple models, optionally assigning each a 'for' or 'against' stance to surface more nuanced responses.
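
For illustration, a consensus request might look roughly like the following; the field names and model identifiers are assumptions for this sketch, not the tool's actual schema:

```python
# Hypothetical request shape for the consensus tool; field names and
# model identifiers are illustrative only.
consensus_request = {
    "prompt": "Should we migrate the background workers to a message queue?",
    "models": [
        {"model": "o3", "stance": "for"},             # argues in favour
        {"model": "gemini-pro", "stance": "against"}, # argues against
        {"model": "flash"},                           # no stance -> neutral
    ],
}
```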

* Deduplicate model resolution; model_context should be available before reaching deeper parts of the code
Improve the abstraction used when building conversations
Throw programmer errors early

* Guardrails
Support the `model:option` format at the MCP boundary so future tools can accept additional options when needed, rather than handling this only for consensus
Model names now accept an optional ":option" suffix for future use
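
A minimal sketch of how the split might work at the boundary; the helper name is hypothetical, not the repository's actual parser:

```python
def split_model_option(raw: str) -> tuple[str, str | None]:
    """Split a 'model:option' string into (model, option).

    Hypothetical helper for illustration; the real boundary code may differ.
    """
    model, sep, option = raw.partition(":")
    return model.strip(), option.strip() if sep else None

# Examples: "o3:for" -> ("o3", "for"); "flash" -> ("flash", None)
```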

* Simplified async flow

* Improved the request model to support natural language
Simplified async flow

* Fix consensus tool async/sync patterns to match codebase standards

CRITICAL FIXES:
- Converted _get_consensus_responses from async to sync (matches other tools)
- Converted store_conversation_turn from async to sync (add_turn is synchronous)
- Removed unnecessary asyncio imports and sleep calls
- Fixed ClosedResourceError in MCP protocol during long consensus operations

PATTERN ALIGNMENT:
- Consensus tool now follows same sync patterns as all other tools
- Only execute() and prepare_prompt() are async (base class requirement)
- All internal operations are synchronous like analyze, chat, debug, etc. (see the sketch below)
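
A rough, self-contained sketch of that shape; the class name and method bodies below are illustrative stand-ins under these assumptions, not the actual implementation:

```python
class ConsensusToolSketch:
    """Illustrative skeleton only; the real tool derives from the shared base class."""

    async def execute(self, prompt: str, models: list[str]) -> str:
        # Async signature is the base-class requirement; the work itself is synchronous.
        responses = self._get_consensus_responses(prompt, models)
        self.store_conversation_turn(prompt, responses)
        return "\n\n".join(responses)

    def _get_consensus_responses(self, prompt: str, models: list[str]) -> list[str]:
        # Synchronous internals, matching analyze, chat, debug, etc.
        return [f"[{m}] response to: {prompt}" for m in models]

    def store_conversation_turn(self, prompt: str, responses: list[str]) -> None:
        # add_turn is synchronous, so no asyncio is needed here.
        pass
```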

TESTING:
- MCP simulation test now passes: consensus_stance 
- Two-model consensus works correctly in ~35 seconds
- Unknown stance handling defaults to neutral with warnings (see the sketch after this list)
- All 9 unit tests pass (100% success rate)
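
The defaulting behaviour might be sketched as follows; the constant and function names are illustrative assumptions, not the tool's actual code:

```python
import logging

logger = logging.getLogger(__name__)

VALID_STANCES = {"for", "against", "neutral"}  # illustrative set of accepted stances

def normalize_stance(stance: str | None) -> str:
    """Return a valid stance, defaulting unknown values to 'neutral' with a warning."""
    value = (stance or "neutral").strip().lower()
    if value not in VALID_STANCES:
        logger.warning("Unknown stance %r, defaulting to neutral", stance)
        return "neutral"
    return value
```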

The consensus tool's async patterns were anomalous in the codebase.
This fix aligns them with the established synchronous patterns used
by all other tools while maintaining full functionality.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fixed call order and added a new test

* Clean up dead comments
Docs for the new tool
Improved tests

---------

Co-authored-by: Claude <noreply@anthropic.com>
Commit 95556ba9ea by Beehive Innovations, 2025-06-17 10:53:17 +04:00, committed by GitHub; parent 9b98df650b.
31 changed files with 2643 additions and 324 deletions


@@ -884,7 +884,7 @@ def build_conversation_history(context: ThreadContext, model_context=None, read_
                 history_parts.append("(No accessible files found)")
                 logger.debug(f"[FILES] No accessible files found from {len(files_to_include)} planned files")
         else:
-            # Fallback to original read_files function for backward compatibility
+            # Fallback to original read_files function
             files_content = read_files_func(all_files)
             if files_content:
                 # Add token validation for the combined file content
@@ -940,14 +940,10 @@ def build_conversation_history(context: ThreadContext, model_context=None, read_
             turn_header += ") ---"
             turn_parts.append(turn_header)
-            # Add files context if present - but just reference which files were used
-            # (the actual contents are already embedded above)
-            if turn.files:
-                turn_parts.append(f"Files used in this turn: {', '.join(turn.files)}")
-                turn_parts.append("")  # Empty line for readability
-            # Add the actual content
-            turn_parts.append(turn.content)
+            # Get tool-specific formatting if available
+            # This includes file references and the actual content
+            tool_formatted_content = _get_tool_formatted_content(turn)
+            turn_parts.extend(tool_formatted_content)
             # Calculate tokens for this turn
             turn_content = "\n".join(turn_parts)
@@ -1019,6 +1015,63 @@ def build_conversation_history(context: ThreadContext, model_context=None, read_
     return complete_history, total_conversation_tokens
+
+
+def _get_tool_formatted_content(turn: ConversationTurn) -> list[str]:
+    """
+    Get tool-specific formatting for a conversation turn.
+
+    This function attempts to use the tool's custom formatting method if available,
+    falling back to default formatting if the tool cannot be found or doesn't
+    provide custom formatting.
+
+    Args:
+        turn: The conversation turn to format
+
+    Returns:
+        list[str]: Formatted content lines for this turn
+    """
+    if turn.tool_name:
+        try:
+            # Dynamically import to avoid circular dependencies
+            from server import TOOLS
+
+            tool = TOOLS.get(turn.tool_name)
+            if tool and hasattr(tool, "format_conversation_turn"):
+                # Use tool-specific formatting
+                return tool.format_conversation_turn(turn)
+        except Exception as e:
+            # Log but don't fail - fall back to default formatting
+            logger.debug(f"[HISTORY] Could not get tool-specific formatting for {turn.tool_name}: {e}")
+
+    # Default formatting
+    return _default_turn_formatting(turn)
+
+
+def _default_turn_formatting(turn: ConversationTurn) -> list[str]:
+    """
+    Default formatting for conversation turns.
+
+    This provides the standard formatting when no tool-specific
+    formatting is available.
+
+    Args:
+        turn: The conversation turn to format
+
+    Returns:
+        list[str]: Default formatted content lines
+    """
+    parts = []
+
+    # Add files context if present
+    if turn.files:
+        parts.append(f"Files used in this turn: {', '.join(turn.files)}")
+        parts.append("")  # Empty line for readability
+
+    # Add the actual content
+    parts.append(turn.content)
+
+    return parts
+
+
 def _is_valid_uuid(val: str) -> bool:
     """
     Validate UUID format for security