Merge branch 'main' into feature/docs_workflow

@@ -10,12 +10,23 @@
> **📚 [Comprehensive Documentation Available](docs/)** - This README provides quick start instructions. For detailed guides, API references, architecture documentation, and development workflows, see our [complete documentation](docs/).

The ultimate development partner for Claude - a Model Context Protocol server that gives Claude access to Google's Gemini 2.5 Pro for extended thinking, code analysis, and problem-solving. **Automatically reads files and directories, passing their contents to Gemini for analysis within its 1M token context.**

The ultimate development partner for Claude - a Model Context Protocol server that gives Claude access to Google's Gemini models (2.5 Pro for extended thinking, 2.0 Flash for speed) for code analysis, problem-solving, and collaborative development. **Automatically reads files and directories, passing their contents to Gemini for analysis within its 1M token context.**

**Features true AI orchestration with conversation continuity across tool usage** - start a task with one tool, continue with another, and maintain full context throughout. Claude and Gemini can collaborate seamlessly across multiple interactions and different tools, creating a unified development experience.

**Features true AI orchestration with conversations that continue across tasks** - Give Claude a complex task and ask it to collaborate with Gemini.

Claude stays in control, performs the actual work, but gets a second perspective from Gemini. Claude will talk to Gemini, work on implementation, then automatically resume the conversation with Gemini while maintaining the full thread.

Claude can switch between different Gemini tools ([`thinkdeep`](#2-thinkdeep---extended-reasoning-partner) → [`chat`](#1-chat---general-development-chat--collaborative-thinking) → [`precommit`](#4-precommit---pre-commit-validation) → [`codereview`](#3-codereview---professional-code-review)) and the conversation context carries forward seamlessly.

For example, in the video above, Claude was asked to debate SwiftUI vs UIKit with Gemini, resulting in a back-and-forth discussion rather than a simple one-shot query and response.

**Think of it as Claude Code _for_ Claude Code.**

---

> ⚠️ **Active Development Notice**
> This project has been under rapid development, with frequent commits and changes over the past few days.
> The goal is to expand support beyond Gemini to include additional AI models and providers.
> **Watch this space** for new capabilities and potentially breaking changes between updates!

## Quick Navigation

- **Getting Started**

@@ -38,6 +49,7 @@ The ultimate development partner for Claude - a Model Context Protocol server th
  - [`analyze`](#6-analyze---smart-file-analysis) - File analysis

- **Advanced Topics**
  - [Model Configuration](#model-configuration) - Pro vs Flash model selection
  - [Thinking Modes](#thinking-modes---managing-token-costs--quality) - Control depth vs cost
  - [Working with Large Prompts](#working-with-large-prompts) - Bypass MCP's 25K token limit
  - [Web Search Integration](#web-search-integration) - Smart search recommendations

@@ -588,6 +600,7 @@ All tools that work with files support **both individual files and entire direct
**`analyze`** - Analyze files or directories
- `files`: List of file paths or directories (required)
- `question`: What to analyze (required)
- `model`: pro|flash (default: server default)
- `analysis_type`: architecture|performance|security|quality|general
- `output_format`: summary|detailed|actionable
- `thinking_mode`: minimal|low|medium|high|max (default: medium)

@@ -595,11 +608,13 @@ All tools that work with files support **both individual files and entire direct

```
"Use gemini to analyze the src/ directory for architectural patterns"
"Get gemini to analyze main.py and tests/ to understand test coverage"
"Use flash to quickly analyze main.py and tests/ to understand test coverage"
"Use pro for deep analysis of the entire backend/ directory structure"
```
**`codereview`** - Review code files or directories
- `files`: List of file paths or directories (required)
- `model`: pro|flash (default: server default)
- `review_type`: full|security|performance|quick
- `focus_on`: Specific aspects to focus on
- `standards`: Coding standards to enforce

@@ -607,12 +622,13 @@ All tools that work with files support **both individual files and entire direct
- `thinking_mode`: minimal|low|medium|high|max (default: medium)

```
"Use gemini to review the entire api/ directory for security issues"
"Get gemini to review src/ with focus on performance, only show critical issues"
"Use pro to review the entire api/ directory for security issues"
"Use flash to quickly review src/ with focus on performance, only show critical issues"
```
**`debug`** - Debug with file context
- `error_description`: Description of the issue (required)
- `model`: pro|flash (default: server default)
- `error_context`: Stack trace or logs
- `files`: Files or directories related to the issue
- `runtime_info`: Environment details
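
The `debug` tool is invoked with natural-language prompts in the same style as the other tools. The examples below are illustrative only (the file names and error described are hypothetical, not taken from this README):

```
"Use gemini to debug this TypeError, here's the stack trace and the related file"
"Use flash to quickly debug why the tests started failing after the last change"
```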

@@ -626,6 +642,7 @@ All tools that work with files support **both individual files and entire direct

**`thinkdeep`** - Extended analysis with file context
- `current_analysis`: Your current thinking (required)
- `model`: pro|flash (default: server default)
- `problem_context`: Additional context
- `focus_areas`: Specific aspects to focus on
- `files`: Files or directories for context
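
As with the other tools, `thinkdeep` takes natural-language prompts; the examples below are illustrative (the scenarios are hypothetical, not from this README):

```
"Use gemini to think deeper about my caching design, focus on invalidation edge cases"
"Use pro to extend my analysis of this migration plan, here are the schema files"
```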

@@ -867,7 +884,31 @@ This enables better integration, error handling, and support for the dynamic con
The server includes several configurable properties that control its behavior:

### Model Configuration
- **`GEMINI_MODEL`**: `"gemini-2.5-pro-preview-06-05"` - The latest Gemini 2.5 Pro model with native thinking support

**Default Model (Environment Variable):**
- **`DEFAULT_MODEL`**: Set your preferred default model globally
  - Default: `"gemini-2.5-pro-preview-06-05"` (extended thinking capabilities)
  - Alternative: `"gemini-2.0-flash-exp"` (faster responses)

**Per-Tool Model Selection:**
All tools support a `model` parameter for flexible model switching:
- **`"pro"`** → Gemini 2.5 Pro (extended thinking, slower, higher quality)
- **`"flash"`** → Gemini 2.0 Flash (faster responses, lower cost)
- **Full model names** → Direct model specification
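
The alias scheme above can be sketched as a small resolver. This is only an illustrative sketch of the behavior described (the function and dictionary names are hypothetical, not the server's actual implementation):

```python
# Hypothetical sketch of the "pro"/"flash" alias resolution described above.
MODEL_ALIASES = {
    "pro": "gemini-2.5-pro-preview-06-05",  # extended thinking, higher quality
    "flash": "gemini-2.0-flash-exp",        # faster responses, lower cost
}

def resolve_model(requested, default="gemini-2.5-pro-preview-06-05"):
    """Map shorthand aliases to full model names; pass full names through."""
    if requested is None:
        return default  # no per-tool override: fall back to the default model
    return MODEL_ALIASES.get(requested, requested)  # full names are unchanged
```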

**Examples:**
```env
# Set default globally in .env file
DEFAULT_MODEL=flash
```

```
# Per-tool usage in Claude
"Use flash to quickly analyze this function"
"Use pro for deep architectural analysis"
```

**Token Limits:**
- **`MAX_CONTEXT_TOKENS`**: `1,000,000` - Maximum input context (1M tokens for Gemini 2.5 Pro)
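
As a rough sense of what a 1M-token budget means for file input, token counts can be approximated at about four characters per token. This is a common rule of thumb, not Gemini's actual tokenizer, and the helper below is an illustrative sketch rather than part of the server:

```python
# Rough estimate only: real token counts depend on the model's tokenizer.
MAX_CONTEXT_TOKENS = 1_000_000
CHARS_PER_TOKEN = 4  # common heuristic, not exact

def estimate_tokens(text):
    """Approximate the token count of a string."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(texts, budget=MAX_CONTEXT_TOKENS):
    """Check whether the combined estimated tokens stay within the budget."""
    return sum(estimate_tokens(t) for t in texts) <= budget
```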

### Temperature Defaults