feat!: Full code can now be generated by an external model and shared with the AI tool (Claude Code / Codex, etc.)!

Model definitions now support a new `allow_code_generation` flag, intended only for higher-reasoning models such as GPT-5-Pro and Gemini 2.5 Pro.

When `true`, the `chat` tool can request that the external model generate a full implementation, update, or set of instructions, and then share the result with the calling agent.

This effectively lets us use more powerful models such as GPT-5-Pro (available only via the API or as part of the $200 Pro plan within the ChatGPT app) to generate code or entire implementations for us.
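As a rough illustration of where such a flag might live, here is a minimal sketch of a model-capabilities definition in Python. Only `allow_code_generation` comes from this commit; the class name, the other fields, and all values are assumptions for illustration, not the repository's actual schema:

```python
# Minimal sketch of a model-capabilities definition. Only
# `allow_code_generation` is from this commit; everything else here
# (class name, fields, values) is an illustrative assumption.
from dataclasses import dataclass


@dataclass
class ModelCapabilities:
    model_name: str
    context_window: int
    # Off by default: only high-reasoning models should be asked to
    # produce complete implementations for the calling agent.
    allow_code_generation: bool = False


# Hypothetical entry for a high-reasoning model.
GPT5_PRO = ModelCapabilities(
    model_name="gpt-5-pro",
    context_window=400_000,  # illustrative value
    allow_code_generation=True,
)
```

Defaulting the flag to off matches the stated intent: cheaper or weaker models are never asked to emit full implementations unless a definition explicitly opts in.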
Author: Fahad
Date: 2025-10-07 18:49:13 +04:00
Parent: 04f7ce5b03
Commit: ece8a5ebed
29 changed files with 1008 additions and 122 deletions


@@ -5,7 +5,7 @@ from utils.conversation_memory import get_thread
 from utils.storage_backend import get_storage_backend


-def test_first_response_persisted_in_conversation_history():
+def test_first_response_persisted_in_conversation_history(tmp_path):
     """Ensure the assistant's initial reply is stored for newly created threads."""

     # Clear in-memory storage to avoid cross-test contamination
@@ -13,7 +13,7 @@ def test_first_response_persisted_in_conversation_history():
     storage._store.clear()  # type: ignore[attr-defined]

     tool = ChatTool()
-    request = ChatRequest(prompt="First question?", model="local-llama")
+    request = ChatRequest(prompt="First question?", model="local-llama", working_directory=str(tmp_path))
     response_text = "Here is the initial answer."

     # Mimic the first tool invocation (no continuation_id supplied)