fix(providers): omit store parameter for OpenRouter responses endpoint
OpenRouter's /responses endpoint rejects store:true via Zod validation. This is an endpoint-level limitation, not model-specific. The fix conditionally omits the store parameter for OpenRouter while retaining it for direct OpenAI and Azure OpenAI providers.

- Add provider type check in _generate_with_responses_endpoint
- Include debug logging when the store parameter is omitted
- Add regression tests for both OpenRouter and OpenAI behavior

Fixes #348
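For context, a minimal sketch of the request this code path produces, assuming raw HTTP transport and an illustrative OpenRouter endpoint path (the provider's actual client plumbing, model id, and URL may differ); note the absent store key:

    import httpx

    # Illustrative payload mirroring completion_params from the hunk below;
    # the model id, input, and endpoint URL are assumptions, not from the repo.
    completion_params = {
        "model": "openai/o3-mini",
        "input": [{"role": "user", "content": "hello"}],
        "reasoning": {"effort": "medium"},
        # no "store" key: OpenRouter's Zod validation rejects store:true
    }
    resp = httpx.post(
        "https://openrouter.ai/api/v1/responses",  # assumed endpoint path
        headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
        json=completion_params,
        timeout=60.0,
    )
    resp.raise_for_status()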
@@ -421,9 +421,17 @@ class OpenAICompatibleProvider(ModelProvider):
             "model": model_name,
             "input": input_messages,
             "reasoning": {"effort": effort},
-            "store": True,
         }
 
+        # Only include the store parameter for providers that support it.
+        # OpenRouter's /responses endpoint rejects store:true via Zod validation (Issue #348).
+        # This is an endpoint-level limitation, not model-specific, so we omit it for all
+        # OpenRouter /responses calls. If OpenRouter later supports store, revisit this logic.
+        if self.get_provider_type() != ProviderType.OPENROUTER:
+            completion_params["store"] = True
+        else:
+            logging.debug(f"Omitting 'store' parameter for OpenRouter provider (model: {model_name})")
+
         # Add max tokens if specified (using max_completion_tokens for responses endpoint)
         if max_output_tokens:
             completion_params["max_completion_tokens"] = max_output_tokens
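The regression tests mentioned in the message aren't shown in this hunk; here is a minimal sketch of what they assert, with ProviderType and build_responses_params as self-contained stand-ins for the provider's real method (these names are hypothetical, not taken from the repo):

    from enum import Enum


    class ProviderType(Enum):
        OPENAI = "openai"
        OPENROUTER = "openrouter"


    def build_responses_params(provider_type, model_name):
        # Stand-in mirroring the patched logic: include store only for
        # non-OpenRouter providers.
        params = {
            "model": model_name,
            "input": [],
            "reasoning": {"effort": "medium"},
        }
        if provider_type != ProviderType.OPENROUTER:
            params["store"] = True
        return params


    def test_openrouter_omits_store():
        params = build_responses_params(ProviderType.OPENROUTER, "openai/o3-mini")
        assert "store" not in params


    def test_openai_keeps_store():
        params = build_responses_params(ProviderType.OPENAI, "o3-mini")
        assert params.get("store") is True

Run with pytest; both tests pass against the stand-in, mirroring the behavior the real suite checks for OpenRouter and direct OpenAI.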