🚀 Major Enhancement: Workflow-Based Tool Architecture v5.5.0 (#95)

* WIP: new workflow architecture

* WIP: further improvements and cleanup

* WIP: cleanup and docs, replace old tool with new

* WIP: cleanup and docs, replace old tool with new

* WIP: new planner implementation using workflow

* WIP: precommit tool working as a workflow instead of a basic tool
Support passing use_assistant_model=False to skip external models entirely and use Claude only
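
As a rough sketch of the pattern this commit describes (the function and field names here are illustrative assumptions, not the project's actual API), a workflow tool might gate its external-model call on the flag like this:

```python
# Hypothetical sketch: a use_assistant_model=False flag skips the
# external expert-model call and falls back to local-only handling.
# run_workflow_step and the return strings are illustrative only.

def run_workflow_step(request: dict) -> str:
    use_assistant_model = request.get("use_assistant_model", True)
    if not use_assistant_model:
        # Skip external models entirely; Claude handles the step alone.
        return "local-only: " + request["prompt"]
    # Otherwise, consult the configured external model (stubbed here).
    return "expert: " + request["prompt"]

print(run_workflow_step({"prompt": "review diff", "use_assistant_model": False}))
```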

* WIP: precommit workflow version swapped with old

* WIP: codereview

* WIP: replaced codereview

* WIP: replaced codereview

* WIP: replaced refactor

* WIP: workflow for thinkdeep

* WIP: ensure files get embedded correctly

* WIP: thinkdeep replaced with workflow version

* WIP: improved messaging when an external model's response is received

* WIP: analyze tool swapped

* WIP: updated tests
* Extract only the content when building history
* Use "relevant_files" for workflow tools only

* WIP: updated tests
* Extract only the content when building history
* Use "relevant_files" for workflow tools only
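
The "extract only the content" change above can be sketched roughly as follows (a minimal illustration, assuming turns are dicts with a `content` field alongside tool metadata; this is not the project's actual implementation):

```python
# Illustrative sketch: when rebuilding conversation history, keep only
# each turn's text content and drop tool metadata (status, model, etc.).

def build_history(turns: list[dict]) -> str:
    return "\n".join(turn["content"] for turn in turns if "content" in turn)

history = build_history([
    {"role": "user", "content": "Analyze this file", "status": "ok"},
    {"role": "assistant", "content": "Looks fine", "model": "some-model"},
])
print(history)
```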

* WIP: fixed get_completion_next_steps_message missing param

* Fixed tests
Request files consistently

* Fixed tests
Request files consistently

* Fixed tests

* New testgen workflow tool
Updated docs

* Swap testgen workflow

* Fix CI test failures by excluding API-dependent tests

- Update GitHub Actions workflow to exclude simulation tests that require API keys
- Fix collaboration tests to properly mock workflow tool expert analysis calls
- Update test assertions to handle new workflow tool response format
- Ensure unit tests run without external API dependencies in CI
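
The mocking fix above follows the standard `unittest.mock.patch` pattern. A self-contained sketch (the `WorkflowTool` class and method names here are stand-ins, not the project's real classes):

```python
# Hedged sketch of mocking an expert-analysis call so unit tests never
# hit a real external API. WorkflowTool here is a stand-in, not the
# project's actual class.
from unittest.mock import patch


class WorkflowTool:
    def _call_expert_analysis(self, prompt: str) -> str:
        raise RuntimeError("would call external API")  # never runs when patched

    def execute(self, request: dict) -> dict:
        analysis = self._call_expert_analysis(request["prompt"])
        return {"status": "complete", "expert_analysis": analysis}


with patch.object(WorkflowTool, "_call_expert_analysis", return_value="mocked"):
    result = WorkflowTool().execute({"prompt": "review"})
    assert result["expert_analysis"] == "mocked"
```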

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* WIP - Update tests to match new tools

* WIP - Update tests to match new tools

---------

Co-authored-by: Claude <noreply@anthropic.com>
Committed by Beehive Innovations on 2025-06-21 00:08:11 +04:00 (via GitHub)
Parent: 4dae6e457e
Commit: 69a3121452
76 changed files with 17111 additions and 7725 deletions


@@ -6,7 +6,7 @@ from unittest.mock import patch
 import pytest
-from tools.analyze import AnalyzeTool
+from tools.chat import ChatTool
 class TestAutoMode:
@@ -65,7 +65,7 @@ class TestAutoMode:
         importlib.reload(config)
-        tool = AnalyzeTool()
+        tool = ChatTool()
         schema = tool.get_input_schema()
         # Model should be required
@@ -89,7 +89,7 @@ class TestAutoMode:
         """Test that tool schemas don't require model in normal mode"""
         # This test uses the default from conftest.py which sets non-auto mode
         # The conftest.py mock_provider_availability fixture ensures the model is available
-        tool = AnalyzeTool()
+        tool = ChatTool()
         schema = tool.get_input_schema()
         # Model should not be required
@@ -114,12 +114,12 @@ class TestAutoMode:
         importlib.reload(config)
-        tool = AnalyzeTool()
+        tool = ChatTool()
         # Mock the provider to avoid real API calls
         with patch.object(tool, "get_model_provider"):
             # Execute without model parameter
-            result = await tool.execute({"files": ["/tmp/test.py"], "prompt": "Analyze this"})
+            result = await tool.execute({"prompt": "Test prompt"})
         # Should get error
         assert len(result) == 1
@@ -165,7 +165,7 @@ class TestAutoMode:
         ModelProviderRegistry._instance = None
-        tool = AnalyzeTool()
+        tool = ChatTool()
         # Test with real provider resolution - this should attempt to use a model
         # that doesn't exist in the OpenAI provider's model list