Lint
@@ -364,8 +364,8 @@ async def handle_call_tool(name: str, arguments: dict[str, Any]) -> list[TextCon
"""
Handle incoming tool execution requests from MCP clients.

This is the main request dispatcher that routes tool calls to their appropriate handlers.
It supports both AI-powered tools (from TOOLS registry) and utility tools (implemented as
static functions).

CONVERSATION LIFECYCLE MANAGEMENT:
@@ -373,15 +373,15 @@ async def handle_call_tool(name: str, arguments: dict[str, Any]) -> list[TextCon
1. THREAD RESUMPTION: When continuation_id is present, it reconstructs complete conversation
   context from Redis including conversation history and file references

2. CROSS-TOOL CONTINUATION: Enables seamless handoffs between different tools (analyze →
   codereview → debug) while preserving full conversation context and file references

3. CONTEXT INJECTION: Reconstructed conversation history is embedded into tool prompts
   using the dual prioritization strategy:
   - Files: Newest-first prioritization (recent file versions take precedence)
   - Turns: Newest-first collection for token efficiency, chronological presentation for LLM

4. FOLLOW-UP GENERATION: After tool execution, generates continuation offers for ongoing
   AI-to-AI collaboration with natural language instructions
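
To make these lifecycle steps concrete, here is a minimal, self-contained sketch of how such a dispatcher can be wired together. It is an illustration under assumptions, not the project's implementation: TOOLS and reconstruct_thread_context are names taken from this commit, while handle_call, run_utility_tool and the shape of the continuation offer are hypothetical stand-ins.

    # Hedged sketch of the dispatch + conversation lifecycle described above.
    import asyncio
    import uuid
    from typing import Any

    TOOLS: dict[str, Any] = {}  # AI-powered tools registered by name (empty in this sketch)

    async def reconstruct_thread_context(arguments: dict[str, Any]) -> dict[str, Any]:
        # Stand-in for the Redis-backed reconstruction documented further down in this commit.
        arguments["prompt"] = "=== CONVERSATION HISTORY ===\n...\n\n" + arguments.get("prompt", "")
        return arguments

    async def run_utility_tool(name: str, arguments: dict[str, Any]) -> str:
        return f"{name}: ok"  # e.g. a static version or health check

    async def handle_call(name: str, arguments: dict[str, Any]) -> dict[str, Any]:
        # 1-3. THREAD RESUMPTION / CONTEXT INJECTION: rebuild history when resuming a thread.
        if arguments.get("continuation_id"):
            arguments = await reconstruct_thread_context(arguments)
        # Route to an AI-powered tool from the registry, else fall back to a utility tool.
        tool = TOOLS.get(name)
        if tool is not None:
            output = await tool.execute(arguments)
        else:
            output = await run_utility_tool(name, arguments)
        # 4. FOLLOW-UP GENERATION: offer a continuation id so another tool can pick up the thread.
        return {
            "output": output,
            "continuation_offer": arguments.get("continuation_id") or str(uuid.uuid4()),
        }

    print(asyncio.run(handle_call("version", {"prompt": "What version is running?"})))
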
@@ -531,36 +531,36 @@ async def reconstruct_thread_context(arguments: dict[str, Any]) -> dict[str, Any
"""
Reconstruct conversation context for stateless-to-stateful thread continuation.

This is a critical function that transforms the inherently stateless MCP protocol into
stateful multi-turn conversations. It loads persistent conversation state from Redis
and rebuilds complete conversation context using the sophisticated dual prioritization
strategy implemented in the conversation memory system.

CONTEXT RECONSTRUCTION PROCESS:

1. THREAD RETRIEVAL: Loads complete ThreadContext from Redis using continuation_id
   - Includes all conversation turns with tool attribution
   - Preserves file references and cross-tool context
   - Handles conversation chains across multiple linked threads

2. CONVERSATION HISTORY BUILDING: Uses build_conversation_history() to create
   comprehensive context with intelligent prioritization:

   FILE PRIORITIZATION (Newest-First Throughout):
   - When same file appears in multiple turns, newest reference wins
   - File embedding prioritizes recent versions, excludes older duplicates
   - Token budget management ensures most relevant files are preserved

   CONVERSATION TURN PRIORITIZATION (Dual Strategy):
   - Collection Phase: Processes turns newest-to-oldest for token efficiency
   - Presentation Phase: Presents turns chronologically for LLM understanding
   - Ensures recent context is preserved when token budget is constrained

3. CONTEXT INJECTION: Embeds reconstructed history into tool request arguments
   - Conversation history becomes part of the tool's prompt context
   - Files referenced in previous turns are accessible to current tool
   - Cross-tool knowledge transfer is seamless and comprehensive

4. TOKEN BUDGET MANAGEMENT: Applies model-specific token allocation
   - Balances conversation history vs. file content vs. response space
   - Gracefully handles token limits with intelligent exclusion strategies
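
The reconstruction process can be sketched roughly as follows. Redis is replaced by an in-memory dict, the turn structure is invented for illustration, and only ThreadContext and build_conversation_history are names that appear in this diff; everything else is a hypothetical stand-in.

    # Hedged sketch of steps 1-3 above; not the module's actual code.
    from __future__ import annotations

    from dataclasses import dataclass, field
    from typing import Any

    @dataclass
    class ThreadContext:
        thread_id: str
        turns: list[dict[str, Any]] = field(default_factory=list)  # e.g. {"content", "tool", "files"}

    FAKE_REDIS: dict[str, ThreadContext] = {}  # stand-in for the Redis store

    def get_thread(continuation_id: str) -> ThreadContext | None:
        return FAKE_REDIS.get(continuation_id)  # step 1: thread retrieval

    def build_conversation_history(context: ThreadContext) -> str:
        # Step 2, greatly simplified: the real function applies the dual prioritization
        # strategy; here turns are simply rendered chronologically.
        lines = [f"--- Turn {i + 1} ({t.get('tool', '?')}): {t['content']}" for i, t in enumerate(context.turns)]
        return "=== CONVERSATION HISTORY ===\n" + "\n".join(lines)

    def reconstruct(arguments: dict[str, Any]) -> dict[str, Any]:
        context = get_thread(arguments["continuation_id"])
        if context is None:
            raise ValueError("Conversation thread expired or not found")
        # Step 3: context injection - the history becomes part of the tool's prompt.
        arguments["prompt"] = build_conversation_history(context) + "\n\n" + arguments.get("prompt", "")
        return arguments

    FAKE_REDIS["abc123"] = ThreadContext("abc123", [{"content": "Analyze utils.py", "tool": "analyze"}])
    print(reconstruct({"continuation_id": "abc123", "prompt": "Now review it for bugs"})["prompt"])
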
@@ -40,8 +40,8 @@ Key Features:
- Graceful degradation when Redis is unavailable

DUAL PRIORITIZATION STRATEGY (Files & Conversations):
The conversation memory system implements sophisticated prioritization for both files and
conversation turns, using a consistent "newest-first" approach during collection but
presenting information in the optimal format for LLM consumption:

FILE PRIORITIZATION (Newest-First Throughout):
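
As a brief illustration of the newest-first rule for files (the behaviour spelled out under FILE PRIORITIZATION in the reconstruct_thread_context docstring above), a hypothetical helper could deduplicate file references like this:

    # Hypothetical sketch: walk turns newest-to-oldest and keep only the first
    # (i.e. most recent) reference to each path.
    def newest_first_files(turns: list[dict]) -> list[str]:
        seen: set[str] = set()
        ordered: list[str] = []
        for turn in reversed(turns):              # newest turn first
            for path in turn.get("files", []):
                if path not in seen:              # older duplicate references are skipped
                    seen.add(path)
                    ordered.append(path)
        return ordered

    turns = [
        {"files": ["a.py", "b.py"]},  # turn 1 (oldest)
        {"files": ["b.py", "c.py"]},  # turn 2 (newest)
    ]
    print(newest_first_files(turns))  # ['b.py', 'c.py', 'a.py'] - the newest b.py reference wins
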
@@ -64,7 +64,7 @@ CONVERSATION TURN PRIORITIZATION (Newest-First Collection, Chronological Present
- LLM sees natural conversation flow: "Turn 1 → Turn 2 → Turn 3..."
- Maintains proper sequential understanding while preserving recency prioritization

This dual approach ensures optimal context preservation (newest-first) with natural
conversation flow (chronological) for maximum LLM comprehension and relevance.

USAGE EXAMPLE:
@@ -568,19 +568,19 @@ def build_conversation_history(context: ThreadContext, model_context=None, read_
CONVERSATION TURN ORDERING STRATEGY:
The function employs a sophisticated two-phase approach for optimal token utilization:

PHASE 1 - COLLECTION (Newest-First for Token Budget):
- Processes conversation turns in REVERSE chronological order (newest to oldest)
- Prioritizes recent turns within token constraints
- If token budget is exceeded, OLDER turns are excluded first
- Ensures the most contextually relevant recent exchanges are preserved

PHASE 2 - PRESENTATION (Chronological for LLM Understanding):
- Reverses the collected turns back to chronological order (oldest to newest)
- Presents conversation flow naturally for LLM comprehension
- Maintains "--- Turn 1, Turn 2, Turn 3..." sequential numbering
- Enables LLM to follow conversation progression logically

This approach balances recency prioritization with natural conversation flow.
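
A short, hypothetical sketch of the two-phase ordering just described; estimate_tokens is an invented heuristic and the turns are plain strings rather than the module's real turn objects:

    # Hedged sketch of the two phases; not the project's implementation.
    def estimate_tokens(text: str) -> int:
        return max(1, len(text) // 4)             # very rough 4-characters-per-token estimate

    def order_turns(turns: list[str], token_budget: int) -> list[tuple[int, str]]:
        # PHASE 1 - COLLECTION: newest-to-oldest; once the budget runs out, OLDER turns are dropped.
        collected: list[tuple[int, str]] = []
        remaining = token_budget
        for index in range(len(turns) - 1, -1, -1):
            cost = estimate_tokens(turns[index])
            if cost > remaining:
                break                             # this turn and everything older is excluded
            collected.append((index + 1, turns[index]))
            remaining -= cost
        # PHASE 2 - PRESENTATION: reverse back to chronological order for the LLM.
        return list(reversed(collected))

    turns = ["first question", "a very long tool response " * 40, "follow-up question", "latest reply"]
    for number, text in order_turns(turns, token_budget=60):
        print(f"--- Turn {number}: {text[:30]}")  # prints Turn 3 then Turn 4; older turns were dropped
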

TOKEN MANAGEMENT: