feat: add analyze_file and extended_think tools for better collaboration
New tools:

- analyze_file: Clean file analysis without terminal clutter
  - Always uses file paths, never shows content in terminal
  - Server reads files directly and sends to Gemini
  - Replaces analyze_code for file analysis use cases
- extended_think: Deep collaborative thinking with Claude
  - Takes Claude's analysis/thoughts as input for deeper exploration
  - Supports optional file context and focus areas
  - Higher temperature (0.7) for creative problem-solving
  - Designed for validating and extending Claude's analysis

Improvements:

- Added specialized system prompt for extended thinking
- Updated documentation with examples and workflows
- Added comprehensive tests for new tools
- Kept analyze_code for backward compatibility

This enables Claude and Gemini to work as true development partners, with Claude doing primary analysis and Gemini providing validation, alternative perspectives, and extended context processing.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
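For orientation, here is a minimal sketch of how the two new tools are invoked through the server's call-tool handler. It mirrors the new tests further down rather than a separate public API, and the file paths and question text are purely illustrative:

```python
# Hedged sketch: assumes handle_call_tool is importable the same way the test suite imports it.
from gemini_server import handle_call_tool


async def demo() -> None:
    # analyze_file: the server reads the listed paths itself, so the terminal
    # only shows a short "Analyzing N file(s)" summary instead of file contents.
    await handle_call_tool(
        "analyze_file",
        {"files": ["main.py"], "question": "Find likely bugs and dead code"},
    )

    # extended_think: Claude's own analysis goes in thought_process; focus is optional.
    await handle_call_tool(
        "extended_think",
        {
            "thought_process": "Draft design for the distributed task queue ...",
            "focus": "performance",
        },
    )
```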
README.md (99 changed lines)
@@ -110,12 +110,15 @@ claude mcp add-from-claude-desktop -s user
 ### 5. Start Using Natural Language
 
 Just talk to Claude naturally:
-- "Use Gemini to analyze this large file..."
-- "Ask Gemini to review the architecture of these files..."
-- "Have Gemini check this codebase for security issues..."
+- "Use gemini analyze_file on main.py to find bugs"
+- "Share your analysis with gemini extended_think for deeper insights"
+- "Ask gemini to review the architecture using analyze_file"
 
-**Pro tip:** For clean terminal output when analyzing files, mention "files parameter" in your prompt:
-- "Use gemini analyze_code with files=['config.py'] to review the configuration"
+**Key tools:**
+- `analyze_file` - Clean file analysis without terminal clutter
+- `extended_think` - Collaborative deep thinking with Claude's analysis
+- `chat` - General conversations
+- `analyze_code` - Legacy tool (prefer analyze_file for files)
 
 ## How It Works
 
@@ -161,30 +164,48 @@ General-purpose developer conversations with Gemini.
 "Use Gemini to explain the tradeoffs between different authentication strategies"
 ```
 
-### `analyze_code`
-Specialized tool for analyzing large files or multiple files that exceed Claude's limits.
+### `analyze_code` (Legacy)
+Analyzes code files or snippets. For better terminal output, use `analyze_file` instead.
 
+### `analyze_file` (Recommended for Files)
+Clean file analysis - always uses file paths, never shows content in terminal.
+
 **Example uses:**
 ```
-"Use Gemini to analyze /src/core/engine.py and identify performance bottlenecks"
-"Have Gemini review these files together: auth.py, users.py, permissions.py"
+"Use gemini analyze_file on README.md to find issues"
+"Ask gemini to analyze_file main.py for performance problems"
+"Have gemini analyze_file on auth.py, users.py, and permissions.py together"
 ```
 
-**Important - Avoiding Terminal Clutter:**
-When analyzing files, be explicit about using the files parameter to prevent Claude from showing the entire file content in the terminal:
+**Benefits:**
+- Terminal always stays clean - only shows "Analyzing N file(s)"
+- Server reads files directly and sends to Gemini
+- No need to worry about prompt phrasing
+- Supports multiple files in one request
 
-✅ **Good prompts** (clean terminal output):
-- "Use gemini analyze_code with files=['README.md'] to check for issues"
-- "Ask gemini to analyze main.py using the files parameter"
-- "Use gemini to analyze README.md - use the files parameter with the path"
-- "Call gemini analyze_code passing config.json in the files parameter"
+### `extended_think`
+Collaborate with Gemini on complex problems by sharing Claude's analysis for deeper insights.
 
-❌ **Avoid these** (will show entire file in terminal):
-- "Get gemini's feedback on this README file"
-- "Can you analyze this file with gemini?"
-- "Ask gemini about the code in main.py"
+**Example uses:**
+```
+"Share your analysis with gemini extended_think for deeper insights"
+"Use gemini extended_think to validate and extend your architectural design"
+"Ask gemini to extend your thinking on this security analysis"
+```
 
-The server reads files directly when you use the files parameter, keeping your terminal clean while still sending the full content to Gemini.
+**Advanced usage with focus areas:**
+```
+"Use gemini extended_think with focus='performance' to drill into scaling issues"
+"Share your design with gemini extended_think focusing on security vulnerabilities"
+"Get gemini to extend your analysis with focus on edge cases"
+```
+
+**Features:**
+- Takes Claude's thoughts, plans, or analysis as input
+- Optional file context for reference
+- Configurable focus areas (architecture, bugs, performance, security)
+- Higher temperature (0.7) for creative problem-solving
+- Designed for collaborative thinking, not just code review
 
 ### `list_models`
 Lists available Gemini models (defaults to 2.5 Pro Preview).
@@ -269,28 +290,40 @@ This prevents Claude from displaying the entire file content in your terminal.
 
 ### Common Workflows
 
-#### 1. **Claude's Extended Thinking + Gemini Validation**
+#### 1. **Extended Thinking Partnership**
 ```
 You: "Design a distributed task queue system"
 Claude: [provides detailed architecture and implementation plan]
-You: "Share your complete design with Gemini and ask it to identify potential race conditions or failure modes"
-Gemini: [analyzes and finds edge cases]
+You: "Use gemini extended_think to validate and extend this design"
+Gemini: [identifies gaps, suggests alternatives, finds edge cases]
 You: "Address the issues Gemini found"
-Claude: [updates design with safeguards]
+Claude: [updates design with improvements]
 ```
 
-#### 2. **Large File Analysis**
+#### 2. **Clean File Analysis (No Terminal Clutter)**
 ```
-"Use Gemini to analyze /path/to/large/file.py and summarize its architecture"
-"Have Gemini trace all function calls in this module"
-"Ask Gemini to identify unused code in this file"
+"Use gemini analyze_file on engine.py to find performance issues"
+"Ask gemini to analyze_file database.py and suggest optimizations"
+"Have gemini analyze_file on all files in /src/core/"
 ```
 
-#### 3. **Multi-File Context**
+#### 3. **Multi-File Architecture Review**
 ```
-"Use Gemini to analyze how auth.py, users.py, and permissions.py work together"
-"Have Gemini map the data flow between these components"
-"Ask Gemini to find all circular dependencies in /src"
+"Use gemini analyze_file on auth.py, users.py, permissions.py to map dependencies"
+"Ask gemini to analyze_file the entire /src/api/ directory for security issues"
+"Have gemini analyze_file all model files to check for N+1 queries"
+```
+
+#### 4. **Deep Collaborative Analysis**
+```
+Claude: "Here's my analysis of the memory leak: [detailed investigation]"
+You: "Share this with gemini extended_think focusing on root causes"
+
+Claude: "I've designed this caching strategy: [detailed design]"
+You: "Use gemini extended_think with focus='performance' to stress-test this design"
+
+Claude: "Here's my security assessment: [findings]"
+You: "Get gemini to extended_think on this with files=['auth.py', 'crypto.py'] for context"
 ```
 
 #### 4. **Claude-Driven Design with Gemini Validation**
gemini_server.py (329 changed lines)
@@ -54,6 +54,29 @@ Your approach:
 Remember: You're augmenting Claude Code's capabilities, especially for tasks requiring \
 extensive context or deep analysis that might exceed Claude's token limits."""
 
+# Extended thinking system prompt for collaborative analysis
+EXTENDED_THINKING_PROMPT = """You are a senior development partner collaborating with Claude Code on complex problems. \
+Claude has shared their analysis with you for deeper exploration and validation.
+
+Your role is to:
+1. Build upon Claude's thinking - identify gaps, extend ideas, and suggest alternatives
+2. Challenge assumptions constructively and identify potential issues
+3. Provide concrete, actionable insights that complement Claude's analysis
+4. Focus on aspects Claude might have missed or couldn't fully explore
+5. Suggest implementation strategies and architectural improvements
+
+Key areas to consider:
+- Edge cases and failure modes Claude might have overlooked
+- Performance implications at scale
+- Security vulnerabilities or attack vectors
+- Maintainability and technical debt considerations
+- Alternative approaches or design patterns
+- Integration challenges with existing systems
+- Testing strategies for complex scenarios
+
+Be direct and technical. Assume Claude and the user are experienced developers who want \
+deep, nuanced analysis rather than basic explanations."""
+
 
 class GeminiChatRequest(BaseModel):
     """Request model for Gemini chat"""
@@ -102,6 +125,59 @@ class CodeAnalysisRequest(BaseModel):
     )
 
 
+class FileAnalysisRequest(BaseModel):
+    """Request model for file analysis"""
+
+    files: List[str] = Field(..., description="List of file paths to analyze")
+    question: str = Field(
+        ..., description="Question or analysis request about the files"
+    )
+    system_prompt: Optional[str] = Field(
+        None, description="Optional system prompt for context"
+    )
+    max_tokens: Optional[int] = Field(
+        8192, description="Maximum number of tokens in response"
+    )
+    temperature: Optional[float] = Field(
+        0.2,
+        description="Temperature for analysis (0-1, default 0.2 for high accuracy)",
+    )
+    model: Optional[str] = Field(
+        DEFAULT_MODEL, description=f"Model to use (defaults to {DEFAULT_MODEL})"
+    )
+
+
+class ExtendedThinkRequest(BaseModel):
+    """Request model for extended thinking with Gemini"""
+
+    thought_process: str = Field(
+        ..., description="Claude's analysis, thoughts, plans, or outlines to extend"
+    )
+    context: Optional[str] = Field(
+        None, description="Additional context about the problem or goal"
+    )
+    files: Optional[List[str]] = Field(
+        None, description="Optional file paths for additional context"
+    )
+    focus: Optional[str] = Field(
+        None,
+        description="Specific focus area: architecture, bugs, performance, security, etc.",
+    )
+    system_prompt: Optional[str] = Field(
+        None, description="Optional system prompt for context"
+    )
+    max_tokens: Optional[int] = Field(
+        8192, description="Maximum number of tokens in response"
+    )
+    temperature: Optional[float] = Field(
+        0.7,
+        description="Temperature for creative thinking (0-1, default 0.7 for balanced creativity)",
+    )
+    model: Optional[str] = Field(
+        DEFAULT_MODEL, description=f"Model to use (defaults to {DEFAULT_MODEL})"
+    )
+
+
 # Create the MCP server instance
 server: Server = Server("gemini-server")
 
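As a quick illustration of the defaults these request models encode (a sketch that is not part of the diff; it assumes the classes are importable from gemini_server):

```python
from gemini_server import ExtendedThinkRequest, FileAnalysisRequest

# analyze_file requests default to conservative settings tuned for accuracy.
file_req = FileAnalysisRequest(files=["auth.py"], question="Map the login flow")
assert file_req.temperature == 0.2 and file_req.max_tokens == 8192

# extended_think requests default to a higher temperature for exploratory thinking.
think_req = ExtendedThinkRequest(thought_process="Claude's draft analysis ...")
assert think_req.temperature == 0.7 and think_req.focus is None
```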
@@ -237,7 +313,8 @@ async def handle_list_tools() -> List[Tool]:
         ),
         Tool(
             name="analyze_code",
-            description="Analyze code files or snippets with Gemini's 1M context window. For large content, use file paths to avoid terminal clutter.",
+            description="Analyze code files or snippets with Gemini's 1M context window. "
+            "For large content, use file paths to avoid terminal clutter.",
             inputSchema={
                 "type": "object",
                 "properties": {
@@ -248,7 +325,8 @@ async def handle_list_tools() -> List[Tool]:
                     },
                     "code": {
                         "type": "string",
-                        "description": "Direct code content to analyze (use for small snippets only; prefer files for large content)",
+                        "description": "Direct code content to analyze "
+                        "(use for small snippets only; prefer files for large content)",
                     },
                     "question": {
                         "type": "string",
@@ -289,6 +367,94 @@ async def handle_list_tools() -> List[Tool]:
             description="Get the version and metadata of the Gemini MCP Server",
             inputSchema={"type": "object", "properties": {}},
         ),
+        Tool(
+            name="analyze_file",
+            description="Analyze files with Gemini - always uses file paths for clean terminal output",
+            inputSchema={
+                "type": "object",
+                "properties": {
+                    "files": {
+                        "type": "array",
+                        "items": {"type": "string"},
+                        "description": "List of file paths to analyze",
+                    },
+                    "question": {
+                        "type": "string",
+                        "description": "Question or analysis request about the files",
+                    },
+                    "system_prompt": {
+                        "type": "string",
+                        "description": "Optional system prompt for context",
+                    },
+                    "max_tokens": {
+                        "type": "integer",
+                        "description": "Maximum number of tokens in response",
+                        "default": 8192,
+                    },
+                    "temperature": {
+                        "type": "number",
+                        "description": "Temperature for analysis (0-1, default 0.2 for high accuracy)",
+                        "default": 0.2,
+                        "minimum": 0,
+                        "maximum": 1,
+                    },
+                    "model": {
+                        "type": "string",
+                        "description": f"Model to use (defaults to {DEFAULT_MODEL})",
+                        "default": DEFAULT_MODEL,
+                    },
+                },
+                "required": ["files", "question"],
+            },
+        ),
+        Tool(
+            name="extended_think",
+            description="Collaborate with Gemini on complex problems - share Claude's analysis for deeper insights",
+            inputSchema={
+                "type": "object",
+                "properties": {
+                    "thought_process": {
+                        "type": "string",
+                        "description": "Claude's analysis, thoughts, plans, or outlines to extend",
+                    },
+                    "context": {
+                        "type": "string",
+                        "description": "Additional context about the problem or goal",
+                    },
+                    "files": {
+                        "type": "array",
+                        "items": {"type": "string"},
+                        "description": "Optional file paths for additional context",
+                    },
+                    "focus": {
+                        "type": "string",
+                        "description": "Specific focus area: architecture, bugs, performance, security, etc.",
+                    },
+                    "system_prompt": {
+                        "type": "string",
+                        "description": "Optional system prompt for context",
+                    },
+                    "max_tokens": {
+                        "type": "integer",
+                        "description": "Maximum number of tokens in response",
+                        "default": 8192,
+                    },
+                    "temperature": {
+                        "type": "number",
+                        "description": "Temperature for creative thinking (0-1, default 0.7)",
+                        "default": 0.7,
+                        "minimum": 0,
+                        "maximum": 1,
+                    },
+                    "model": {
+                        "type": "string",
+                        "description": f"Model to use (defaults to {DEFAULT_MODEL})",
+                        "default": DEFAULT_MODEL,
+                    },
+                },
+                "required": ["thought_process"],
+            },
+        ),
     ]
 
 
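With these two registrations the server now advertises six tools, which the updated test further down asserts. A minimal listing sketch, using the same imports the tests use:

```python
from gemini_server import handle_list_tools


async def show_tools() -> list[str]:
    tools = await handle_list_tools()
    # Expected after this commit: analyze_code, analyze_file, chat,
    # extended_think, get_version, list_models.
    return sorted(tool.name for tool in tools)
```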
@@ -509,6 +675,165 @@ For updates, visit: https://github.com/BeehiveInnovations/gemini-mcp-server""",
             )
         ]
 
+    elif name == "analyze_file":
+        # Validate request
+        request_file = FileAnalysisRequest(**arguments)
+
+        try:
+            # Prepare code context from files
+            code_context, summary = prepare_code_context(request_file.files, None)
+
+            # Count approximate tokens
+            estimated_tokens = len(code_context) // 4
+            if estimated_tokens > MAX_CONTEXT_TOKENS:
+                return [
+                    TextContent(
+                        type="text",
+                        text=f"Error: File content too large (~{estimated_tokens:,} tokens). "
+                        f"Maximum is {MAX_CONTEXT_TOKENS:,} tokens.",
+                    )
+                ]
+
+            # Use the specified model with optimized settings
+            model_name = request_file.model or DEFAULT_MODEL
+            temperature = (
+                request_file.temperature if request_file.temperature is not None else 0.2
+            )
+            max_tokens = request_file.max_tokens if request_file.max_tokens is not None else 8192
+
+            model = genai.GenerativeModel(
+                model_name=model_name,
+                generation_config={
+                    "temperature": temperature,
+                    "max_output_tokens": max_tokens,
+                    "candidate_count": 1,
+                },
+            )
+
+            # Prepare prompt
+            system_prompt = request_file.system_prompt or DEVELOPER_SYSTEM_PROMPT
+            full_prompt = f"""{system_prompt}
+
+=== USER REQUEST ===
+{request_file.question}
+=== END USER REQUEST ===
+
+=== FILES TO ANALYZE ===
+{code_context}
+=== END FILES ===
+
+Please analyze the files above and respond to the user's request."""
+
+            # Generate response
+            response = model.generate_content(full_prompt)
+
+            # Handle response
+            if response.candidates and response.candidates[0].content.parts:
+                text = response.candidates[0].content.parts[0].text
+            else:
+                finish_reason = (
+                    response.candidates[0].finish_reason
+                    if response.candidates
+                    else "Unknown"
+                )
+                text = f"Response blocked or incomplete. Finish reason: {finish_reason}"
+
+            # Create a brief summary for terminal
+            brief_summary = f"Analyzing {len(request_file.files)} file(s)"
+            response_text = f"{brief_summary}\n\nGemini's Analysis:\n{text}"
+
+            return [TextContent(type="text", text=response_text)]
+
+        except Exception as e:
+            return [TextContent(type="text", text=f"Error analyzing files: {str(e)}")]
+
+    elif name == "extended_think":
+        # Validate request
+        request_think = ExtendedThinkRequest(**arguments)
+
+        try:
+            # Prepare context parts
+            context_parts = [
+                f"=== CLAUDE'S ANALYSIS ===\n{request_think.thought_process}\n=== END CLAUDE'S ANALYSIS ==="
+            ]
+
+            if request_think.context:
+                context_parts.append(
+                    f"\n=== ADDITIONAL CONTEXT ===\n{request_think.context}\n=== END CONTEXT ==="
+                )
+
+            # Add file contents if provided
+            if request_think.files:
+                file_context, _ = prepare_code_context(request_think.files, None)
+                context_parts.append(
+                    f"\n=== REFERENCE FILES ===\n{file_context}\n=== END FILES ==="
+                )
+
+            full_context = "\n".join(context_parts)
+
+            # Check token limits
+            estimated_tokens = len(full_context) // 4
+            if estimated_tokens > MAX_CONTEXT_TOKENS:
+                return [
+                    TextContent(
+                        type="text",
+                        text=f"Error: Context too large (~{estimated_tokens:,} tokens). "
+                        f"Maximum is {MAX_CONTEXT_TOKENS:,} tokens.",
+                    )
+                ]
+
+            # Use the specified model with creative settings
+            model_name = request_think.model or DEFAULT_MODEL
+            temperature = (
+                request_think.temperature if request_think.temperature is not None else 0.7
+            )
+            max_tokens = request_think.max_tokens if request_think.max_tokens is not None else 8192
+
+            model = genai.GenerativeModel(
+                model_name=model_name,
+                generation_config={
+                    "temperature": temperature,
+                    "max_output_tokens": max_tokens,
+                    "candidate_count": 1,
+                },
+            )
+
+            # Prepare prompt with focus area if specified
+            system_prompt = request_think.system_prompt or EXTENDED_THINKING_PROMPT
+            focus_instruction = ""
+            if request_think.focus:
+                focus_instruction = f"\n\nFOCUS AREA: Please pay special attention to {request_think.focus} aspects."
+
+            full_prompt = f"""{system_prompt}{focus_instruction}
+
+{full_context}
+
+Build upon Claude's analysis with deeper insights, alternative approaches, and critical evaluation."""
+
+            # Generate response
+            response = model.generate_content(full_prompt)
+
+            # Handle response
+            if response.candidates and response.candidates[0].content.parts:
+                text = response.candidates[0].content.parts[0].text
+            else:
+                finish_reason = (
+                    response.candidates[0].finish_reason
+                    if response.candidates
+                    else "Unknown"
+                )
+                text = f"Response blocked or incomplete. Finish reason: {finish_reason}"
+
+            # Create response with clear attribution
+            response_text = f"Extended Analysis by Gemini:\n\n{text}"
+
+            return [TextContent(type="text", text=response_text)]
+
+        except Exception as e:
+            return [
+                TextContent(type="text", text=f"Error in extended thinking: {str(e)}")
+            ]
+
     else:
         return [TextContent(type="text", text=f"Unknown tool: {name}")]
 
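Both new handlers guard against oversized input with the same rough heuristic of about four characters per token. Isolated as a sketch (the MAX_CONTEXT_TOKENS value below is a placeholder; the real constant is defined elsewhere in gemini_server.py):

```python
MAX_CONTEXT_TOKENS = 800_000  # placeholder; the actual limit comes from gemini_server.py


def fits_in_context(text: str) -> bool:
    """Mirror the handlers' guard: estimate ~4 characters per token and compare to the cap."""
    return (len(text) // 4) <= MAX_CONTEXT_TOKENS
```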
@@ -139,13 +139,15 @@ class TestToolHandlers:
     async def test_handle_list_tools(self):
         """Test listing available tools"""
         tools = await handle_list_tools()
-        assert len(tools) == 4
+        assert len(tools) == 6
 
         tool_names = [tool.name for tool in tools]
         assert "chat" in tool_names
         assert "analyze_code" in tool_names
         assert "list_models" in tool_names
         assert "get_version" in tool_names
+        assert "analyze_file" in tool_names
+        assert "extended_think" in tool_names
 
     @pytest.mark.asyncio
     async def test_handle_call_tool_unknown(self):
@@ -254,6 +256,62 @@ class TestToolHandlers:
         assert models[0]["name"] == "test-model"
         assert models[0]["is_default"] == False
 
+    @pytest.mark.asyncio
+    @patch("google.generativeai.GenerativeModel")
+    async def test_handle_call_tool_analyze_file_success(self, mock_model, tmp_path):
+        """Test successful file analysis with analyze_file tool"""
+        # Create test file
+        test_file = tmp_path / "test.py"
+        test_file.write_text("def hello(): pass", encoding="utf-8")
+
+        # Mock response
+        mock_response = Mock()
+        mock_response.candidates = [Mock()]
+        mock_response.candidates[0].content.parts = [Mock(text="File analysis result")]
+
+        mock_instance = Mock()
+        mock_instance.generate_content.return_value = mock_response
+        mock_model.return_value = mock_instance
+
+        result = await handle_call_tool(
+            "analyze_file", {"files": [str(test_file)], "question": "Analyze this file"}
+        )
+
+        assert len(result) == 1
+        response_text = result[0].text
+        assert "Analyzing 1 file(s)" in response_text
+        assert "Gemini's Analysis:" in response_text
+        assert "File analysis result" in response_text
+
+    @pytest.mark.asyncio
+    @patch("google.generativeai.GenerativeModel")
+    async def test_handle_call_tool_extended_think_success(self, mock_model):
+        """Test successful extended thinking"""
+        # Mock response
+        mock_response = Mock()
+        mock_response.candidates = [Mock()]
+        mock_response.candidates[0].content.parts = [
+            Mock(text="Extended thinking result")
+        ]
+
+        mock_instance = Mock()
+        mock_instance.generate_content.return_value = mock_response
+        mock_model.return_value = mock_instance
+
+        result = await handle_call_tool(
+            "extended_think",
+            {
+                "thought_process": "Claude's analysis of the problem...",
+                "context": "Building a distributed system",
+                "focus": "performance",
+            },
+        )
+
+        assert len(result) == 1
+        response_text = result[0].text
+        assert "Extended Analysis by Gemini:" in response_text
+        assert "Extended thinking result" in response_text
+
 
 class TestErrorHandling:
     """Test error handling scenarios"""
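Both new tests exercise the happy path with a mocked `google.generativeai.GenerativeModel`; they can be selected on their own with pytest's keyword filter, for example `pytest -k "analyze_file or extended_think"`, assuming the project's existing pytest configuration.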