diff --git a/MCP_DISCOVERY.md b/MCP_DISCOVERY.md
new file mode 100644
index 0000000..e4890bf
--- /dev/null
+++ b/MCP_DISCOVERY.md
@@ -0,0 +1,125 @@
+# How Claude Discovers and Uses MCP Servers
+
+## MCP Server Discovery
+
+When you configure an MCP server in Claude Desktop, Claude automatically discovers its capabilities through the MCP protocol:
+
+1. **On Startup**: Claude connects to all configured MCP servers
+2. **Tool Discovery**: Claude calls `list_tools()` to discover available tools
+3. **Schema Understanding**: Each tool provides its input schema, which Claude uses to understand how to call it
+
+## How This Gemini Server Appears in Claude
+
+Once configured, this Gemini MCP server provides three tools that Claude can use:
+
+### 1. `gemini_chat`
+- Claude sees this as a way to chat with Gemini
+- You can invoke it naturally: "Ask Gemini about...", "Use Gemini to..."
+
+### 2. `gemini_analyze_code`
+- Claude recognizes this for code analysis tasks
+- Triggered by: "Use Gemini to analyze this file", "Have Gemini review this code"
+
+### 3. `gemini_list_models`
+- Lists available models
+- Usually called automatically when needed
+
+## Natural Language Usage
+
+Claude is smart about understanding your intent. You don't need special syntax:
+
+### Examples that work:
+- "Ask Gemini what it thinks about quantum computing"
+- "Use Gemini to analyze the file /path/to/large/file.py"
+- "Have Gemini review this code for security issues"
+- "Get Gemini's opinion on this architecture"
+- "Pass this to Gemini for extended analysis"
+
+### What happens behind the scenes:
+1. Claude recognizes keywords like "Gemini", "analyze", "review"
+2. Claude determines which tool to use based on context
+3. Claude extracts parameters (files, questions, etc.) from your request
+4. Claude calls the appropriate MCP tool
+5. Claude presents the response back to you
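+
+For the curious, here is a rough sketch of the server side of this handshake, written with the Python MCP SDK. The tool name matches this server, but the handler bodies are illustrative (`ask_gemini` is a hypothetical stand-in, not a function in this repo):
+
+```python
+import mcp.types as types
+from mcp.server import Server
+
+server = Server("gemini")
+
+@server.list_tools()
+async def list_tools() -> list[types.Tool]:
+    # Claude calls this on connect to learn which tools the server offers
+    return [
+        types.Tool(
+            name="gemini_chat",
+            description="Send a prompt to Gemini and return its reply",
+            inputSchema={
+                "type": "object",
+                "properties": {"prompt": {"type": "string"}},
+                "required": ["prompt"],
+            },
+        )
+    ]
+
+@server.call_tool()
+async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
+    # Runs when Claude decides one of the advertised tools fits your request
+    if name == "gemini_chat":
+        reply = ask_gemini(arguments["prompt"])  # hypothetical helper, not in this repo
+        return [types.TextContent(type="text", text=reply)]
+    raise ValueError(f"Unknown tool: {name}")
+```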
+
+## Configuration in Claude Desktop
+
+### macOS
+Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
+
+```json
+{
+  "mcpServers": {
+    "gemini": {
+      "command": "/path/to/gemini-mcp-server/venv/bin/python",
+      "args": ["/path/to/gemini-mcp-server/gemini_server.py"],
+      "env": {
+        "GEMINI_API_KEY": "your-api-key-here"
+      }
+    }
+  }
+}
+```
+
+### Windows
+Add to `%APPDATA%\Claude\claude_desktop_config.json`
+
+### After Configuration
+1. Restart Claude Desktop
+2. Claude will automatically connect to the Gemini server
+3. You'll see "gemini" in the MCP servers list (bottom of Claude interface)
+4. Start using natural language to invoke Gemini!
+
+## Verification
+
+To verify the server is connected:
+1. Look for the MCP icon in Claude's interface
+2. Ask Claude: "What MCP tools are available?"
+3. Claude should list the Gemini tools
+
+## Troubleshooting
+
+If Claude doesn't recognize Gemini commands:
+1. Check the MCP server icon shows "gemini" as connected
+2. Verify your API key is set correctly
+3. Check Claude's logs for connection errors
+4. Try restarting Claude Desktop
+
+## Integration with Claude Code
+
+In Claude Code, the integration is even more seamless:
+- Large file handling is automatic
+- Claude will suggest using Gemini when hitting token limits
+- File paths are resolved relative to your workspace
\ No newline at end of file
diff --git a/README.md b/README.md
index 7f65847..228b802 100644
--- a/README.md
+++ b/README.md
@@ -2,6 +2,15 @@
 A Model Context Protocol (MCP) server that enables integration with Google's Gemini models, optimized for Gemini 2.5 Pro Preview with 1M token context window.
 
+## How It Works with Claude
+
+Once configured, Claude automatically discovers this server's capabilities. You can use natural language to invoke Gemini:
+- "Ask Gemini about..."
+- "Use Gemini to analyze this file..."
+- "Have Gemini review this code..."
+
+See [MCP_DISCOVERY.md](MCP_DISCOVERY.md) for detailed information about how Claude discovers and uses MCP servers.
+
 ## Features
 
 - **Chat with Gemini**: Send prompts to Gemini 2.5 Pro Preview by default
@@ -115,7 +124,7 @@ When working with large files in Claude Code, you can use the Gemini server like
 
 ## Models
 
-The server defaults to `gemini-2.5-pro-preview-06-05` which supports:
+The server defaults to `gemini-2.5-pro-preview-06-05` (the latest and most capable model), which supports:
 - 1 million token context window
 - Advanced reasoning capabilities
 - Code understanding and analysis
diff --git a/claude_config_example.json b/claude_config_example.json
index b35d28a..6683529 100644
--- a/claude_config_example.json
+++ b/claude_config_example.json
@@ -1,8 +1,8 @@
 {
   "mcpServers": {
     "gemini": {
-      "command": "/Users/fahad/Developer/gemini-mcp-server/venv/bin/python",
-      "args": ["/Users/fahad/Developer/gemini-mcp-server/gemini_server.py"],
+      "command": "/path/to/gemini-mcp-server/venv/bin/python",
+      "args": ["/path/to/gemini-mcp-server/gemini_server.py"],
       "env": {
         "GEMINI_API_KEY": "your-gemini-api-key-here"
       }
diff --git a/gemini_server.py b/gemini_server.py
index 4835135..9de42ba 100755
--- a/gemini_server.py
+++ b/gemini_server.py
@@ -18,8 +18,7 @@ import google.generativeai as genai
 
 # Default to Gemini 2.5 Pro Preview with maximum context
-# Note: 2.5 Pro Preview has restrictions, falling back to 1.5 Pro for better reliability
-DEFAULT_MODEL = "gemini-1.5-pro-latest"  # More reliable, still has large context
+DEFAULT_MODEL = "gemini-2.5-pro-preview-06-05"
 MAX_CONTEXT_TOKENS = 1000000  # 1M tokens