docs:video

This commit is contained in:
Beehive Innovations
2025-10-06 22:18:50 +04:00
committed by GitHub
parent 775e4d50b8
commit c342d60c30

@@ -32,6 +32,20 @@ The system employs a sophisticated **"newest-first"** approach that ensures opti
- **Presentation Phase**: Reverses to chronological order for natural LLM flow
- When token budget is tight, **older turns are excluded first**
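The newest-first budgeting described above can be sketched as follows. This is a minimal illustration, not the project's actual implementation; the function and variable names (`build_history`, `count_tokens`, the one-token-per-word counter) are hypothetical:

```python
def build_history(turns, budget, count_tokens):
    """Walk turns backwards (newest first), keep what fits in the budget,
    then reverse to chronological order for presentation to the LLM."""
    kept = []
    used = 0
    for turn in reversed(turns):  # collection phase: newest first
        cost = count_tokens(turn)
        if used + cost > budget:
            break  # budget exhausted: older turns are excluded first
        kept.append(turn)
        used += cost
    return list(reversed(kept))  # presentation phase: chronological order

# Toy token counter (assumption for the demo): one token per word.
turns = ["t1 old old old", "t2 mid", "t3 new"]
history = build_history(turns, budget=4, count_tokens=lambda t: len(t.split()))
print(history)  # → ['t2 mid', 't3 new']
```

With a budget of 4 "tokens", the oldest turn (`t1`, 4 words) is dropped while the two newest turns survive and are returned in natural chronological order.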
**Showcase**:
The following video demonstrates `continuation` via a casual `continue with gemini...` prompt and the slash command `/continue`.
* We ask Claude Code to pick one, then `chat` with `gemini` to make a final decision
* Gemini responds, confirming the choice. We use `continuation` to ask another question in the same conversation thread
* Gemini responds with an explanation. We use continuation again, this time via the `/zen:continue (MCP)` command
<div align="center">
[Chat With Gemini_web.webm](https://github.com/user-attachments/assets/37bd57ca-e8a6-42f7-b5fb-11de271e95db)
</div>
## Real-World Context Revival Example
Here's how this works in practice with a modern AI/ML workflow:
@@ -93,4 +107,4 @@ This isn't just multi-model access—it's **true AI orchestration** where:
- Claude can coordinate complex multi-step workflows
- Context is never truly lost, just temporarily unavailable to Claude
**This is the closest thing to giving Claude permanent memory for complex development tasks.**