🚀 Major Enhancement: Workflow-Based Tool Architecture v5.5.0 (#95)

* WIP: new workflow architecture

* WIP: further improvements and cleanup

* WIP: cleanup and docs, replace old tool with new

* WIP: new planner implementation using workflow

* WIP: precommit tool working as a workflow instead of a basic tool
Support passing False to use_assistant_model to skip external models entirely and use Claude only

* WIP: precommit workflow version swapped with old

* WIP: codereview

* WIP: replaced codereview

* WIP: replaced refactor

* WIP: workflow for thinkdeep

* WIP: ensure files get embedded correctly

* WIP: thinkdeep replaced with workflow version

* WIP: improved messaging when an external model's response is received

* WIP: analyze tool swapped

* WIP: updated tests
* Extract only the content when building history
* Use "relevant_files" for workflow tools only

* WIP: fixed missing parameter in get_completion_next_steps_message

* Fixed tests
Request files consistently

* Fixed tests

* New testgen workflow tool
Updated docs

* Swap testgen workflow

* Fix CI test failures by excluding API-dependent tests

- Update GitHub Actions workflow to exclude simulation tests that require API keys
- Fix collaboration tests to properly mock workflow tool expert analysis calls
- Update test assertions to handle new workflow tool response format
- Ensure unit tests run without external API dependencies in CI

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* WIP - Update tests to match new tools

---------

Co-authored-by: Claude <noreply@anthropic.com>
Commit 69a3121452 (parent 4dae6e457e)
Author: Beehive Innovations, 2025-06-21 00:08:11 +04:00, committed by GitHub
76 changed files with 17111 additions and 7725 deletions


@@ -691,6 +691,65 @@ class BaseTool(ABC):
         return parts
 
+    def _extract_clean_content_for_history(self, formatted_content: str) -> str:
+        """
+        Extract clean content suitable for conversation history storage.
+
+        This method removes internal metadata, continuation offers, and other
+        tool-specific formatting that should not appear in conversation history
+        when passed to expert models or other tools.
+
+        Args:
+            formatted_content: The full formatted response from the tool
+
+        Returns:
+            str: Clean content suitable for conversation history storage
+        """
+        try:
+            # Try to parse as JSON first (for structured responses)
+            import json
+
+            response_data = json.loads(formatted_content)
+
+            # If it's a ToolOutput-like structure, extract just the content
+            if isinstance(response_data, dict) and "content" in response_data:
+                # Remove continuation_offer and other metadata fields
+                clean_data = {
+                    "content": response_data.get("content", ""),
+                    "status": response_data.get("status", "success"),
+                    "content_type": response_data.get("content_type", "text"),
+                }
+                return json.dumps(clean_data, indent=2)
+            else:
+                # For non-ToolOutput JSON, return as-is but ensure no continuation_offer
+                if "continuation_offer" in response_data:
+                    clean_data = {k: v for k, v in response_data.items() if k != "continuation_offer"}
+                    return json.dumps(clean_data, indent=2)
+                return formatted_content
+        except (json.JSONDecodeError, TypeError):
+            # Not JSON, treat as plain text
+            # Remove any lines that contain continuation metadata
+            lines = formatted_content.split("\n")
+            clean_lines = []
+            for line in lines:
+                # Skip lines containing internal metadata patterns
+                if any(
+                    pattern in line.lower()
+                    for pattern in [
+                        "continuation_id",
+                        "remaining_turns",
+                        "suggested_tool_params",
+                        "if you'd like to continue",
+                        "continuation available",
+                    ]
+                ):
+                    continue
+                clean_lines.append(line)
+            return "\n".join(clean_lines).strip()
+
     def _prepare_file_content_for_prompt(
         self,
         request_files: list[str],
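For readers skimming the diff, the cleaning logic added above can be restated as a self-contained sketch. The function name `extract_clean_content` and the shortened pattern list are illustrative, not the project's actual API:

```python
import json


def extract_clean_content(formatted_content: str) -> str:
    """Strip continuation metadata from a tool response (illustrative sketch)."""
    try:
        response_data = json.loads(formatted_content)
    except (json.JSONDecodeError, TypeError):
        # Plain text: drop any line carrying internal metadata markers
        patterns = ["continuation_id", "remaining_turns", "if you'd like to continue"]
        kept = [
            line
            for line in formatted_content.split("\n")
            if not any(p in line.lower() for p in patterns)
        ]
        return "\n".join(kept).strip()

    if isinstance(response_data, dict) and "content" in response_data:
        # ToolOutput-like structure: keep only the user-facing fields
        return json.dumps(
            {
                "content": response_data.get("content", ""),
                "status": response_data.get("status", "success"),
                "content_type": response_data.get("content_type", "text"),
            },
            indent=2,
        )
    if isinstance(response_data, dict) and "continuation_offer" in response_data:
        return json.dumps(
            {k: v for k, v in response_data.items() if k != "continuation_offer"},
            indent=2,
        )
    return formatted_content


raw = json.dumps(
    {"content": "All tests pass.", "status": "success",
     "content_type": "text", "continuation_offer": {"id": "abc"}}
)
clean = extract_clean_content(raw)
print("continuation_offer" in clean)  # False: the offer never reaches history
```

Either way the response is structured, the continuation offer is dropped before the turn is stored, so expert models replaying the history never see internal bookkeeping.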
@@ -972,6 +1031,26 @@ When recommending searches, be specific about what information you need and why
                     f"Please provide the full absolute path starting with '/' (must be FULL absolute paths to real files / folders - DO NOT SHORTEN)"
                 )
 
+        # Check if request has 'files_checked' attribute (used by workflow tools)
+        if hasattr(request, "files_checked") and request.files_checked:
+            for file_path in request.files_checked:
+                if not os.path.isabs(file_path):
+                    return (
+                        f"Error: All file paths must be FULL absolute paths to real files / folders - DO NOT SHORTEN. "
+                        f"Received relative path: {file_path}\n"
+                        f"Please provide the full absolute path starting with '/' (must be FULL absolute paths to real files / folders - DO NOT SHORTEN)"
+                    )
+
+        # Check if request has 'relevant_files' attribute (used by workflow tools)
+        if hasattr(request, "relevant_files") and request.relevant_files:
+            for file_path in request.relevant_files:
+                if not os.path.isabs(file_path):
+                    return (
+                        f"Error: All file paths must be FULL absolute paths to real files / folders - DO NOT SHORTEN. "
+                        f"Received relative path: {file_path}\n"
+                        f"Please provide the full absolute path starting with '/' (must be FULL absolute paths to real files / folders - DO NOT SHORTEN)"
+                    )
+
         # Check if request has 'path' attribute (used by review_changes tool)
         if hasattr(request, "path") and request.path:
             if not os.path.isabs(request.path):
@@ -1605,10 +1684,13 @@ When recommending searches, be specific about what information you need and why
         if model_response:
             model_metadata = {"usage": model_response.usage, "metadata": model_response.metadata}
 
+        # CRITICAL: Store clean content for conversation history (exclude internal metadata)
+        clean_content = self._extract_clean_content_for_history(formatted_content)
+
         success = add_turn(
             continuation_id,
             "assistant",
-            formatted_content,
+            clean_content,  # Use cleaned content instead of full formatted response
             files=request_files,
             images=request_images,
             tool_name=self.name,
@@ -1728,10 +1810,13 @@ When recommending searches, be specific about what information you need and why
         if model_response:
             model_metadata = {"usage": model_response.usage, "metadata": model_response.metadata}
 
+        # CRITICAL: Store clean content for conversation history (exclude internal metadata)
+        clean_content = self._extract_clean_content_for_history(content)
+
         add_turn(
             thread_id,
             "assistant",
-            content,
+            clean_content,  # Use cleaned content instead of full formatted response
             files=request_files,
             images=request_images,
             tool_name=self.name,