Add Consensus Tool for Multi-Model Perspective Gathering (#67)
* WIP: Refactor model-name resolution; it should happen once at the MCP call boundary, with a model context passed around instead. The consensus tool lets you gather a consensus from multiple models, optionally assigning each a 'for' or 'against' stance to surface more nuanced responses (see the sketch after this commit message).

* Deduplicate model resolution so model_context is available before reaching deeper parts of the code. Improve abstraction when building conversations. Throw programmer errors early.

* Guardrails: support the `model:option` format at the MCP boundary so future tools can use additional options if needed, instead of handling this only for consensus. A model name now accepts an optional ":option" suffix for future use (parsing sketched below).

* Simplified async flow.

* Improved the request model to support natural language; simplified async flow.

* Fix consensus tool async/sync patterns to match codebase standards (sketched below).

  CRITICAL FIXES:
  - Converted _get_consensus_responses from async to sync (matches other tools)
  - Converted store_conversation_turn from async to sync (add_turn is synchronous)
  - Removed unnecessary asyncio imports and sleep calls
  - Fixed ClosedResourceError in the MCP protocol during long consensus operations

  PATTERN ALIGNMENT:
  - The consensus tool now follows the same sync patterns as all other tools
  - Only execute() and prepare_prompt() are async (base class requirement)
  - All internal operations are synchronous, like analyze, chat, debug, etc.

  TESTING:
  - MCP simulation test now passes: consensus_stance ✅
  - Two-model consensus works correctly in ~35 seconds
  - Unknown stance handling defaults to neutral with warnings
  - All 9 unit tests pass (100% success rate)

  The consensus tool's async patterns were anomalous in the codebase. This fix aligns it with the established synchronous patterns used by all other tools while maintaining full functionality.

  🤖 Generated with [Claude Code](https://claude.ai/code)

  Co-Authored-By: Claude <noreply@anthropic.com>

* Fixed call order and added a new test.

* Cleanup: removed dead comments, added docs for the new tool, improved tests.

---------

Co-authored-by: Claude <noreply@anthropic.com>
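As a hedged illustration of the stance feature described in the first bullet, a consensus request might look roughly like the sketch below. The field names ("models", "stance", etc.) are assumptions for illustration, not the tool's verified schema.

```python
# Hypothetical consensus request with per-model stances.
# Field names are illustrative assumptions, not the tool's actual schema.
consensus_request = {
    "prompt": "Should we migrate the service from REST to gRPC?",
    "models": [
        {"model": "o3", "stance": "for"},                 # argues in favour
        {"model": "gemini-2.5-pro", "stance": "against"}, # argues against
        {"model": "flash"},                               # no stance -> neutral
    ],
}
```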
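The `model:option` guardrail implies a small split at the MCP boundary. A minimal sketch, assuming a hypothetical helper name (the repo's actual parsing code may differ):

```python
def parse_model_spec(spec: str) -> tuple[str, str | None]:
    """Split 'model' or 'model:option' into (model_name, option)."""
    model, sep, option = spec.partition(":")
    return model, (option if sep else None)

# The option part is optional and reserved for future use.
assert parse_model_spec("o3") == ("o3", None)
assert parse_model_spec("o3:for") == ("o3", "for")
```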
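The async/sync alignment can be pictured as follows. This is a sketch only: the method names `_get_consensus_responses` and `store_conversation_turn` come from the commit message, but the class shape and signatures are assumptions, not the repo's exact API.

```python
class ConsensusToolSketch:
    """Illustrative only: mirrors the sync/async split described in the commit."""

    async def execute(self, request: dict) -> list[dict]:
        # Only execute() (and prepare_prompt(), not shown) are async -- a
        # base-class requirement; everything below runs synchronously.
        responses = self._get_consensus_responses(request)
        self.store_conversation_turn(request, responses)
        return responses

    def _get_consensus_responses(self, request: dict) -> list[dict]:
        # Synchronous fan-out: query each configured model in turn, like other tools.
        return [{"model": m.get("model"), "text": "..."} for m in request.get("models", [])]

    def store_conversation_turn(self, request: dict, responses: list[dict]) -> None:
        # add_turn is synchronous in the codebase, so persistence stays sync as well.
        pass
```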
commit 95556ba9ea (parent 9b98df650b), committed via GitHub
@@ -291,16 +291,22 @@ class TestFileContentPreparation:
         tool = ThinkDeepTool()
         tool._current_model_name = "auto"
 
+        # Set up model context to simulate normal execution flow
+        from utils.model_context import ModelContext
+
+        tool._model_context = ModelContext("gemini-2.5-pro-preview-06-05")
+
         # Call the method
         content, processed_files = tool._prepare_file_content_for_prompt(["/test/file.py"], None, "test")
 
-        # Check that it logged the correct message
-        debug_calls = [call for call in mock_logger.debug.call_args_list if "Auto mode detected" in str(call)]
+        # Check that it logged the correct message about using model context
+        debug_calls = [call for call in mock_logger.debug.call_args_list if "Using model context" in str(call)]
         assert len(debug_calls) > 0
         debug_message = str(debug_calls[0])
-        # Should use a model suitable for extended reasoning
-        assert "gemini-2.5-pro-preview-06-05" in debug_message or "pro" in debug_message
-        assert "extended_reasoning" in debug_message
+        # Should mention the model being used
+        assert "gemini-2.5-pro-preview-06-05" in debug_message
+        # Should mention file tokens (not content tokens)
+        assert "file tokens" in debug_message
 
 
 class TestProviderHelperMethods:
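The updated test constructs a ModelContext directly and assigns it to `tool._model_context`, mirroring the "resolve once at the MCP boundary" change from the commit message. A minimal sketch of that idea follows, assuming a hypothetical boundary handler and resolution rule; only `ModelContext`, its import path, and the `_model_context` attribute come from the diff above.

```python
from utils.model_context import ModelContext  # module shown in the diff above

def handle_mcp_call(tool, arguments: dict):
    # Hypothetical boundary handler: resolve the requested model once, build a
    # ModelContext, and hand it to the tool so deeper code never re-resolves "auto".
    requested = arguments.get("model", "auto")
    resolved = "gemini-2.5-pro-preview-06-05" if requested == "auto" else requested  # resolution rule assumed
    tool._model_context = ModelContext(resolved)  # same attribute the test sets
    return tool.run(arguments)  # "run" is a placeholder for the tool's real entry point
```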