Handle the case where provider is a string
Updated example
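The diff below applies one defensive pattern at every place `tools/base.py` builds provider metadata: `model_info.get("provider")` may now hold either a provider object or a plain string, so the code branches on `isinstance(provider, str)` and falls back to `str(provider)` when the object has no `get_provider_type()` method. A minimal, standalone sketch of that pattern (`DummyProvider` and the tiny `ProviderType` enum below are illustrative stand-ins, not code from this repository):

```python
from enum import Enum


class ProviderType(Enum):
    """Illustrative stand-in for the registry's provider-type enum."""
    OPENROUTER = "openrouter"


class DummyProvider:
    """Illustrative stand-in for a real provider object."""

    def get_provider_type(self) -> ProviderType:
        return ProviderType.OPENROUTER


def provider_name(provider) -> str:
    """The defensive pattern the commit repeats at each metadata site."""
    # Handle both provider objects and string values
    if isinstance(provider, str):
        return provider
    try:
        return provider.get_provider_type().value
    except AttributeError:
        # Fallback if provider doesn't have get_provider_type method
        return str(provider)


assert provider_name("openrouter") == "openrouter"        # provider stored as a string
assert provider_name(DummyProvider()) == "openrouter"      # normal provider object
assert provider_name(object()).startswith("<object")       # unknown object -> readable fallback
```

In the diff itself the result is assigned inline to `metadata["provider_used"]` or `model_provider` at each site rather than factored into a helper like this.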
@@ -371,7 +371,7 @@ issue, ran `precommit` with o3 again and got:

 **RECOMMENDATION: PROCEED WITH COMMIT**

-Nice!
+Nice! This is just one instance - take a look at [another example here](docs/tools/precommit.md).

 **[📖 Read More](docs/tools/precommit.md)** - Multi-repository validation and change analysis

@@ -45,6 +45,57 @@ logic, everything should work as expected
 How beautiful is that? Claude used `precommit` twice and `codereview` once and actually found and fixed two critical errors before commit!

+### Real-world Example
+
+Here's an example where Claude was made to go through changes *it* was asked to make. Its recommendation
+to **Publish all Changes** came after a rigorous set of prompts in which it examined the small set of changes
+from different angles (performance, bugs, anti-patterns, etc.). After confirming that unit tests were
+passing, it went ahead with an initial review with O3, which came back clean - all good to go. O3 was happy, everything looked great.
+
+Then it shared the changes and relevant code with Gemini 2.5 Pro - the following is the outcome:
+
+```
+⏺ zen:precommit (MCP)(step: "FINAL VALIDATION COMPLETE. All changes have been thoroughly analyzed and are ready for
+commit. The modifications successfully fix 4 GitHub CI test failures with high-quality, defensive
+implementations that maintain architectural integrity and coding standards. No issues or regressions
+detected.", step_number: 4, total_steps: 4, next_step_required: false, findings: "COMMIT
+RECOMMENDATION: ✅ APPROVED\n\nSUMMARY OF CHANGES:\n✅ Fixed 4 GitHub CI test failures:\n -
+test_openrouter_with_restrictions (Mock iteration issue) \n - test_execute_with_branching
+(Missing metadata)\n - test_execute_with_revision (Missing metadata)\n - test_metadata_preserved
+(Provider type AttributeError)\n\n✅ IMPLEMENTATION QUALITY VERIFIED:\n - Defensive programming
+with proper type checking\n - Error handling with meaningful fallbacks\n - Backward
+compatibility maintained\n - Coding standards followed (no hasattr/getattr in changes)\n -
+Architectural patterns preserved\n\n✅ COMPREHENSIVE TESTING:\n - All originally failing tests
+pass\n - Affected functionality: 37/37 tests pass\n - Provider systems: 16/16 tests pass \n -
+Module imports work correctly\n - No regressions detected\n\n✅ CHANGE IMPACT ASSESSMENT:\n -
+Minimal, targeted changes\n - No security or performance concerns\n - Maintains API
+compatibility\n - Changes are well-documented with clear comments\n\nRECOMMENDATION: These changes
+are safe to commit and will resolve the GitHub CI failures without any negative impact.",
+⎿ {
+     "status": "calling_expert_analysis",
+     "step_number": 4,
+   … +169 lines (ctrl+r to expand)
+
+⏺ 🔍 Pre-Commit Validation Complete
+
+Based on comprehensive analysis using both O3 and Gemini 2.5 Pro models, I can provide the following assessment:
+
+✅ APPROVED WITH CRITICAL FIX REQUIRED
+
+The changes successfully fix all 4 GitHub CI test failures and are well-implemented with defensive programming
+patterns. However, the expert analysis identified one critical issue that must be addressed before commit:
+
+🚨 [HIGH PRIORITY] Critical Issue Found
+
+Inconsistent Provider Handling in tools/base.py
+- Location: tools/base.py line ~1833 in _create_continuation_offer_response method
+- Problem: The defensive provider handling pattern was applied in 2 locations but missed a 3rd location
+- Impact: Risk of AttributeError: 'str' object has no attribute 'get_provider_type' in continuation scenarios
+- Fix Required: Apply the same defensive pattern to the missed location
+```
+
+A subtle but critical issue spotted by Gemini that both Claude + O3 missed. This is the power of running these workflows with Zen.
+
 ## Example Prompts

 ```
@@ -170,7 +170,7 @@ class TestOpenRouterAutoMode:
 assert len(available_models) > 0, "Should find OpenRouter models in auto mode"
 assert all(provider_type == ProviderType.OPENROUTER for provider_type in available_models.values())

-expected_models = mock_registry.list_models()
+expected_models = mock_registry.list_models.return_value
 for model in expected_models:
     assert model in available_models, f"Model {model} should be available"

@@ -190,12 +190,18 @@ class TestOpenRouterAutoMode:
 utils.model_restrictions._restriction_service = None

 mock_registry = Mock()
-mock_registry.list_models.return_value = [
+mock_models = [
     "google/gemini-2.5-flash",
     "google/gemini-2.5-pro",
     "anthropic/claude-3-opus",
     "anthropic/claude-3-sonnet",
 ]
+mock_registry.list_models.return_value = mock_models
+
+# Mock the resolve method to return model configs with aliases
+mock_model_config = Mock()
+mock_model_config.aliases = []  # Empty aliases for simplicity
+mock_registry.resolve.return_value = mock_model_config

 ModelProviderRegistry.register_provider(ProviderType.OPENROUTER, OpenRouterProvider)

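The two test hunks above lean on `unittest.mock.Mock` mechanics: calling a mock attribute whose `return_value` was never configured hands back another bare `Mock`, which is not iterable, whereas reading the configured `return_value` (or a named `mock_models` list) always yields the real list - presumably the "Mock iteration issue" named in the findings earlier. A short standalone sketch of that behavior, separate from the test suite:

```python
from unittest.mock import Mock

mock_registry = Mock()

# Calling an unconfigured attribute returns another Mock, and a bare Mock is not iterable.
try:
    for _ in mock_registry.list_models():
        pass
except TypeError as exc:
    print(f"cannot iterate: {exc}")  # "cannot iterate: 'Mock' object is not iterable"

# Once the return value is configured, reading it back (or calling the mock) yields the real list.
mock_models = ["google/gemini-2.5-flash", "anthropic/claude-3-opus"]
mock_registry.list_models.return_value = mock_models

expected_models = mock_registry.list_models.return_value
assert expected_models == mock_models
assert mock_registry.list_models() == mock_models
```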
@@ -1640,7 +1640,15 @@ When recommending searches, be specific about what information you need and why
 # TEST COVERAGE: tests/test_provider_routing_bugs.py::TestProviderMetadataBug
 provider = model_info.get("provider")
 if provider:
-    metadata["provider_used"] = provider.get_provider_type().value
+    # Handle both provider objects and string values
+    if isinstance(provider, str):
+        metadata["provider_used"] = provider
+    else:
+        try:
+            metadata["provider_used"] = provider.get_provider_type().value
+        except AttributeError:
+            # Fallback if provider doesn't have get_provider_type method
+            metadata["provider_used"] = str(provider)

 return ToolOutput(
     status=status_key,
@@ -1684,7 +1692,15 @@ When recommending searches, be specific about what information you need and why
 if model_info:
     provider = model_info.get("provider")
     if provider:
-        model_provider = provider.get_provider_type().value
+        # Handle both provider objects and string values
+        if isinstance(provider, str):
+            model_provider = provider
+        else:
+            try:
+                model_provider = provider.get_provider_type().value
+            except AttributeError:
+                # Fallback if provider doesn't have get_provider_type method
+                model_provider = str(provider)
     model_name = model_info.get("model_name")
     model_response = model_info.get("model_response")
     if model_response:
@@ -1721,7 +1737,15 @@ When recommending searches, be specific about what information you need and why
 # FEATURE: Add provider_used metadata (Added for Issue #98)
 provider = model_info.get("provider")
 if provider:
-    metadata["provider_used"] = provider.get_provider_type().value
+    # Handle both provider objects and string values
+    if isinstance(provider, str):
+        metadata["provider_used"] = provider
+    else:
+        try:
+            metadata["provider_used"] = provider.get_provider_type().value
+        except AttributeError:
+            # Fallback if provider doesn't have get_provider_type method
+            metadata["provider_used"] = str(provider)

 return ToolOutput(
     status="success",
@@ -1814,7 +1838,15 @@ When recommending searches, be specific about what information you need and why
 if model_info:
     provider = model_info.get("provider")
     if provider:
-        model_provider = provider.get_provider_type().value
+        # Handle both provider objects and string values
+        if isinstance(provider, str):
+            model_provider = provider
+        else:
+            try:
+                model_provider = provider.get_provider_type().value
+            except AttributeError:
+                # Fallback if provider doesn't have get_provider_type method
+                model_provider = str(provider)
     model_name = model_info.get("model_name")
     model_response = model_info.get("model_response")
     if model_response:
@@ -1860,7 +1892,15 @@ When recommending searches, be specific about what information you need and why
 # FEATURE: Add provider_used metadata (Added for Issue #98)
 provider = model_info.get("provider")
 if provider:
-    metadata["provider_used"] = provider.get_provider_type().value
+    # Handle both provider objects and string values
+    if isinstance(provider, str):
+        metadata["provider_used"] = provider
+    else:
+        try:
+            metadata["provider_used"] = provider.get_provider_type().value
+        except AttributeError:
+            # Fallback if provider doesn't have get_provider_type method
+            metadata["provider_used"] = str(provider)

 return ToolOutput(
     status="continuation_available",
@@ -1883,7 +1923,15 @@ When recommending searches, be specific about what information you need and why
 # FEATURE: Add provider_used metadata (Added for Issue #98)
 provider = model_info.get("provider")
 if provider:
-    metadata["provider_used"] = provider.get_provider_type().value
+    # Handle both provider objects and string values
+    if isinstance(provider, str):
+        metadata["provider_used"] = provider
+    else:
+        try:
+            metadata["provider_used"] = provider.get_provider_type().value
+        except AttributeError:
+            # Fallback if provider doesn't have get_provider_type method
+            metadata["provider_used"] = str(provider)

 return ToolOutput(
     status="success",
@@ -445,9 +445,23 @@ class PlannerTool(WorkflowTool):
 step_data = self.prepare_step_data(request)
 self.branches[request.branch_id].append(step_data)

-# Update metadata to reflect the new branch
-if "metadata" in response_data:
-    response_data["metadata"]["branches"] = list(self.branches.keys())
+# Ensure metadata exists and preserve existing metadata from build_base_response
+if "metadata" not in response_data:
+    response_data["metadata"] = {}
+
+# Store planner-specific metadata that should persist through workflow metadata addition
+planner_metadata = {
+    "branches": list(self.branches.keys()),
+    "is_step_revision": request.is_step_revision or False,
+    "revises_step_number": request.revises_step_number,
+    "is_branch_point": request.is_branch_point or False,
+    "branch_from_step": request.branch_from_step,
+    "branch_id": request.branch_id,
+    "more_steps_needed": request.more_steps_needed or False,
+}
+
+# Update metadata while preserving existing values
+response_data["metadata"].update(planner_metadata)

 # Add planner-specific output instructions for final steps
 if not request.next_step_required:
@@ -1083,8 +1083,10 @@ class BaseWorkflowMixin(ABC):
     "provider_used": provider_name,
 }

-# Add metadata to response
-response_data["metadata"] = metadata
+# Preserve existing metadata and add workflow metadata
+if "metadata" not in response_data:
+    response_data["metadata"] = {}
+response_data["metadata"].update(metadata)

 logger.debug(
     f"[WORKFLOW_METADATA] {self.get_name()}: Added metadata - "
@@ -1102,7 +1104,10 @@ class BaseWorkflowMixin(ABC):
     "provider_used": "unknown",
 }

-response_data["metadata"] = metadata
+# Preserve existing metadata and add workflow metadata
+if "metadata" not in response_data:
+    response_data["metadata"] = {}
+response_data["metadata"].update(metadata)

 logger.debug(
     f"[WORKFLOW_METADATA] {self.get_name()}: Added fallback metadata - "
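The planner and workflow-mixin hunks share one rule: never assign `response_data["metadata"]` wholesale, because that overwrites keys another layer has already written; ensure the dict exists, then merge with `update()`. A minimal illustration using a couple of the fields that appear in the hunks above (the branch names are made up):

```python
# Metadata written by an earlier layer (e.g. the planner's branch bookkeeping).
response_data = {"metadata": {"branches": ["main", "alt-1"]}}

# Metadata the workflow layer wants to add afterwards.
workflow_metadata = {"provider_used": "openrouter"}

# Wholesale assignment (the old behavior) silently drops the earlier keys.
clobbered = {"metadata": workflow_metadata}
assert "branches" not in clobbered["metadata"]

# The preserving pattern from this commit keeps both sets of keys.
if "metadata" not in response_data:
    response_data["metadata"] = {}
response_data["metadata"].update(workflow_metadata)

assert response_data["metadata"] == {"branches": ["main", "alt-1"], "provider_used": "openrouter"}
```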