diff --git a/docs/context-revival.md b/docs/context-revival.md
index deffa78..29c36f3 100644
--- a/docs/context-revival.md
+++ b/docs/context-revival.md
@@ -32,6 +32,61 @@ The system employs a sophisticated **"newest-first"** approach that ensures opti
 - **Presentation Phase**: Reverses to chronological order for natural LLM flow
 - When token budget is tight, **older turns are excluded first**
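+
+A minimal sketch of this two-phase selection (illustrative only; `Turn`, `estimate_tokens`, and `select_turns` are hypothetical names, not the server's actual API):
+
+```python
+from dataclasses import dataclass
+
+@dataclass
+class Turn:
+    role: str  # "user" or "assistant"
+    text: str
+
+def estimate_tokens(turn: Turn) -> int:
+    return max(1, len(turn.text) // 4)  # rough stand-in for a real tokenizer
+
+def select_turns(history: list[Turn], budget: int) -> list[Turn]:
+    kept, used = [], 0
+    for turn in reversed(history):  # collection phase: walk newest-first
+        cost = estimate_tokens(turn)
+        if used + cost > budget:
+            break  # budget exhausted: older turns are excluded first
+        kept.append(turn)
+        used += cost
+    kept.reverse()  # presentation phase: back to chronological order
+    return kept
+```
+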
+**Showcase**:
+
+The following video demonstrates `continuation` via a casual `continue with gemini...` prompt and the slash command `/continue`.
+
+* We ask Claude Code to pick one, then `chat` with `gemini` to make a final decision
+* Gemini responds, confirming the choice. We use `continuation` to ask another question within the same conversation thread
+* Gemini responds with an explanation. We use `continuation` again, this time via the `/zen:continue (MCP)` slash command
+
+<div align="center">
+
+[Chat With Gemini_web.webm](https://github.com/user-attachments/assets/37bd57ca-e8a6-42f7-b5fb-11de271e95db)
+
+</div>
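+
+Conceptually, each follow-up just hands the thread ID back to the server. A hedged sketch of the request shape (field names here are assumptions, not the exact schema):
+
+```python
+# First call: no continuation_id, so the server starts a new conversation thread.
+first_call = {"tool": "chat", "model": "gemini", "prompt": "Pick one and give a final decision."}
+
+# The response offers a thread handle, e.g. {"continuation_id": "abc123", ...}.
+# Passing it back on the next call revives the full conversation context:
+follow_up = {
+    "tool": "chat",
+    "prompt": "Explain the reasoning behind that choice.",
+    "continuation_id": "abc123",  # hypothetical ID returned by the first call
+}
+```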
+
 ## Real-World Context Revival Example
 
 Here's how this works in practice with a modern AI/ML workflow:
@@ -93,4 +148,4 @@ This isn't just multi-model access—it's **true AI orchestration** where:
 - Claude can coordinate complex multi-step workflows
 - Context is never truly lost, just temporarily unavailable to Claude
 
-**This is the closest thing to giving Claude permanent memory for complex development tasks.**
\ No newline at end of file
+**This is the closest thing to giving Claude permanent memory for complex development tasks.**