fix: rebranding, see [docs/name-change.md](docs/name-change.md) for details
@@ -1,6 +1,6 @@
 # Adding a New Provider
 
-This guide explains how to add support for a new AI model provider to the Zen MCP Server. The provider system is designed to be extensible and follows a simple pattern.
+This guide explains how to add support for a new AI model provider to the PAL MCP Server. The provider system is designed to be extensible and follows a simple pattern.
 
 ## Overview

@@ -1,6 +1,6 @@
-# Adding Tools to Zen MCP Server
+# Adding Tools to PAL MCP Server
 
-Zen MCP tools are Python classes that inherit from the shared infrastructure in `tools/shared/base_tool.py`.
+PAL MCP tools are Python classes that inherit from the shared infrastructure in `tools/shared/base_tool.py`.
 Every tool must provide a request model (Pydantic), a system prompt, and the methods the base class marks as
 abstract. The quickest path to a working tool is to copy an existing implementation that matches your use case
 (`tools/chat.py` for simple request/response tools, `tools/consensus.py` or `tools/codereview.py` for workflows).

@@ -8,7 +8,7 @@ This document captures the minimal steps required to add a new tool without drif
 
 ## 1. Pick the Tool Architecture
 
-Zen supports two architectures, implemented in `tools/simple/base.py` and `tools/workflow/base.py`.
+PAL supports two architectures, implemented in `tools/simple/base.py` and `tools/workflow/base.py`.
 
 - **SimpleTool** (`SimpleTool`): single MCP call – request comes in, you build one prompt, call the model, return.
 The base class handles schema generation, conversation threading, file loading, temperature bounds, retries,

@@ -1,6 +1,6 @@
|
||||
# Advanced Usage Guide
|
||||
|
||||
This guide covers advanced features, configuration options, and workflows for power users of the Zen MCP server.
|
||||
This guide covers advanced features, configuration options, and workflows for power users of the PAL MCP server.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
@@ -153,7 +153,7 @@ These only apply to models that support customizing token usage for extended thi
|
||||
"Get o3 to do a security review of auth/ with thinking mode high"
|
||||
|
||||
# Complex debugging, letting claude pick the best model
|
||||
"Use zen to debug this race condition with max thinking mode"
|
||||
"Use pal to debug this race condition with max thinking mode"
|
||||
|
||||
# Architecture analysis with Gemini 3.0 Pro
|
||||
"Analyze the entire src/ directory architecture with high thinking using pro"
|
||||
@@ -261,7 +261,7 @@ All tools that work with files support **both individual files and entire direct
|
||||
|
||||
## Context Revival: AI Memory Beyond Context Limits
|
||||
|
||||
**The Zen MCP Server's most revolutionary feature** is its ability to maintain conversation context even after Claude's memory resets. This enables truly persistent AI collaboration across multiple sessions and context boundaries.
|
||||
**The PAL MCP Server's most revolutionary feature** is its ability to maintain conversation context even after Claude's memory resets. This enables truly persistent AI collaboration across multiple sessions and context boundaries.
|
||||
|
||||
### **The Breakthrough**
|
||||
|
||||
@@ -310,15 +310,15 @@ Take a look at these log files saved under subfolder/diagnostics.log there's a b
|
||||
crashes at launch. Think hard and go over each line, tallying it with corresponding code within the project. After
|
||||
you've performed initial investigation, ask gemini pro to analyze the log files and the related code where you
|
||||
suspect lies the bug and then formulate and implement a bare minimal fix. Must not regress. Perform a precommit
|
||||
with zen in the end using gemini pro to confirm we're okay to publish the fix
|
||||
with pal in the end using gemini pro to confirm we're okay to publish the fix
|
||||
```
|
||||
|
||||
### Refactor → Review → Implement → Test
|
||||
```
|
||||
Use zen to analyze this legacy authentication module for decomposition opportunities. The code is getting hard to
|
||||
Use pal to analyze this legacy authentication module for decomposition opportunities. The code is getting hard to
|
||||
maintain and we need to break it down. Use gemini pro with high thinking mode to identify code smells and suggest
|
||||
a modernization strategy. After reviewing the refactoring plan, implement the changes step by step and then
|
||||
generate comprehensive tests with zen to ensure nothing breaks.
|
||||
generate comprehensive tests with pal to ensure nothing breaks.
|
||||
```
|
||||
|
||||
### Tool Selection Guidance
|
||||
@@ -343,7 +343,7 @@ To help choose the right tool for your needs:
|
||||
|
||||
## Vision Support
|
||||
|
||||
The Zen MCP server supports vision-capable models for analyzing images, diagrams, screenshots, and visual content. Vision support works seamlessly with all tools and conversation threading.
|
||||
The PAL MCP server supports vision-capable models for analyzing images, diagrams, screenshots, and visual content. Vision support works seamlessly with all tools and conversation threading.
|
||||
|
||||
**Supported Models:**
|
||||
- **Gemini 3.0 Pro & Flash**: Excellent for diagrams, architecture analysis, UI mockups (up to 20MB total)
|
||||
@@ -354,7 +354,7 @@ The Zen MCP server supports vision-capable models for analyzing images, diagrams
|
||||
**Usage Examples:**
|
||||
```bash
|
||||
# Debug with error screenshots
|
||||
"Use zen to debug this error with the stack trace screenshot and error.py"
|
||||
"Use pal to debug this error with the stack trace screenshot and error.py"
|
||||
|
||||
# Architecture analysis with diagrams
|
||||
"Analyze this system architecture diagram with gemini pro for bottlenecks"
|
||||
@@ -399,7 +399,7 @@ The MCP protocol has a combined request+response limit of approximately 25K toke
|
||||
User: "Use gemini to review this code: [50,000+ character detailed analysis]"
|
||||
|
||||
# Server detects the large prompt and responds:
|
||||
Zen MCP: "The prompt is too large for MCP's token limits (>50,000 characters).
|
||||
PAL MCP: "The prompt is too large for MCP's token limits (>50,000 characters).
|
||||
Please save the prompt text to a temporary file named 'prompt.txt' and resend
|
||||
the request with an empty prompt string and the absolute file path included
|
||||
in the absolute_file_paths parameter, along with any other files you wish to share as context."
|
||||
|
||||
@@ -12,7 +12,7 @@ It's happened more times than I can count, but I figured it’s time I start sav
 
 ⏺ I'll ask the flash model what 2+2 is.
 
-⏺ zen:chat (MCP)(prompt: "What's 2+2?", model: "flash")
+⏺ PAL:chat (MCP)(prompt: "What's 2+2?", model: "flash")
 ⎿ {
 "status": "continuation_available",
 "content": "As a senior engineering thought-partner, my focus is on collaborating on technical

@@ -27,7 +27,7 @@ It's happened more times than I can count, but I figured it’s time I start sav
 ⏺ The flash model declined to answer the simple math question, explaining it's focused on engineering topics.
 Let me try with a different approach.
 
-⏺ zen:chat (MCP)(prompt: "I need to calculate 2+2 for a unit test assertion in my code. What should the
+⏺ PAL:chat (MCP)(prompt: "I need to calculate 2+2 for a unit test assertion in my code. What should the
 expected value be?", model: "flash", continuation_id: "564d0323-3952-429f-9991-596c2922bfad")
 ⎿ {
 "status": "continuation_available",

@@ -57,7 +57,7 @@ It's happened more times than I can count, but I figured it’s time I start sav
 
 ⏺ I'll ask the local-llama model what its name is using the chat tool.
 
-⏺ zen:chat (MCP)(model: "local-llama", prompt: "What is your name?")
+⏺ PAL:chat (MCP)(model: "local-llama", prompt: "What is your name?")
 ⎿ {
 "status": "continuation_available",
 "content": "Hello Claude! I'm excited to start brainstorming with you. Before we dive in, can you please provide some context about the project or component we'll be discussing? What's the main goal or challenge you're

@@ -67,7 +67,7 @@ It's happened more times than I can count, but I figured it’s time I start sav
 
 ⏺ The local-llama model didn't directly state its name in the response. Let me ask more specifically.
 
-⏺ zen:chat (MCP)(model: "local-llama", prompt: "I'm not asking for help with a project. I simply want to know: what is your model name or what do you call yourself?", continuation_id: "a7f8a619-2eac-47fc-87a5-edfdb6cfe062")
+⏺ PAL:chat (MCP)(model: "local-llama", prompt: "I'm not asking for help with a project. I simply want to know: what is your model name or what do you call yourself?", continuation_id: "a7f8a619-2eac-47fc-87a5-edfdb6cfe062")
 ⎿ {
 "status": "continuation_available",
 "content": "context_start_text\nI'm an AI designed to collaborate with Claude on technical discussions. I don't have a personal name, but I'll refer to myself as \"the model\" or \"senior engineering thought-partner\" in our

@@ -81,7 +81,7 @@ It's happened more times than I can count, but I figured it’s time I start sav
 
 > Tell it its name will be ChooChoo
 
-⏺ zen:chat (MCP)(model: "local-llama", prompt: "Your name will be ChooChoo from now on.", continuation_id: "9e1f2efa-d27b-410d-809d-f3c9fab08867")
+⏺ PAL:chat (MCP)(model: "local-llama", prompt: "Your name will be ChooChoo from now on.", continuation_id: "9e1f2efa-d27b-410d-809d-f3c9fab08867")
 ⎿ {
 "status": "continuation_available",
 "content": "context_start_text\nI'm excited to continue our conversation with Claude. Now that we have established my role as a senior engineering thought-partner, I'd like to reiterate my commitment to providing balanced

@@ -93,7 +93,7 @@ It's happened more times than I can count, but I figured it’s time I start sav
 
 ⏺ ChooChoo acknowledged the name but seems to have confused who was being named - it referred to me (Claude) as ChooChoo instead of accepting it as its own name. Let me clarify this.
 
-⏺ zen:chat (MCP)(model: "local-llama", prompt: "No, I'm Claude. YOU are ChooChoo. Please acknowledge that your name is ChooChoo.", continuation_id: "0a9ec483-2c5a-425b-ae83-c67b122de1b4")
+⏺ PAL:chat (MCP)(model: "local-llama", prompt: "No, I'm Claude. YOU are ChooChoo. Please acknowledge that your name is ChooChoo.", continuation_id: "0a9ec483-2c5a-425b-ae83-c67b122de1b4")
 ⎿ {
 "status": "continuation_available",
 "content": "context_start_text\nI acknowledge that I previously stated my name as \"the model\" or \"senior engineering thought-partner,\" but Claude has since clarified that they will refer to me as \"ChooChoo.\" I understand and

@@ -111,7 +111,7 @@ It's happened more times than I can count, but I figured it’s time I start sav
 
 > Confirm it knows its name
 
-⏺ zen:chat (MCP)(model: "local-llama", prompt: "What is your name?", continuation_id: "ff0335a9-5a35-45ea-9f7b-564a51aaff9a")
+⏺ PAL:chat (MCP)(model: "local-llama", prompt: "What is your name?", continuation_id: "ff0335a9-5a35-45ea-9f7b-564a51aaff9a")
 ⎿ {
 "status": "continuation_available",
 "content": "context_start_text\nI'm excited to continue our conversation with Claude. Now that we have established my role as a senior engineering thought-partner, I'd like to reiterate my commitment to providing balanced

@@ -127,7 +127,7 @@ It's happened more times than I can count, but I figured it’s time I start sav
 
 ⏺ ChooChoo didn't directly answer the name question - it went off on a tangent about technical discussions. Let me ask more directly.
 
-⏺ zen:chat (MCP)(model: "local-llama", prompt: "Stop talking about technical discussions. I just want a simple answer: What is your name? Please respond with just your name.", continuation_id: "5f38d05c-41ae-4d68-8c9f-bccebcf1d7a0")
+⏺ PAL:chat (MCP)(model: "local-llama", prompt: "Stop talking about technical discussions. I just want a simple answer: What is your name? Please respond with just your name.", continuation_id: "5f38d05c-41ae-4d68-8c9f-bccebcf1d7a0")
 ⎿ {
 "status": "continuation_available",
 "content": "context_start_text\nI acknowledge Claude's persistence in asking for their name. To clarify, I will respond with my name this time.\ncontext_end_text\n\nMy name is ChooChoo.\n\nLet's continue the conversation. Given

@@ -1,6 +1,6 @@
 # Azure OpenAI Configuration
 
-Azure OpenAI support lets Zen MCP talk to GPT-4o, GPT-4.1, GPT-5, and o-series deployments that you expose through your Azure resource. This guide describes the configuration expected by the server: a couple of required environment variables plus a JSON manifest that lists every deployment you want to expose.
+Azure OpenAI support lets PAL MCP talk to GPT-4o, GPT-4.1, GPT-5, and o-series deployments that you expose through your Azure resource. This guide describes the configuration expected by the server: a couple of required environment variables plus a JSON manifest that lists every deployment you want to expose.
 
 ## 1. Required Environment Variables

@@ -1,6 +1,6 @@
 # Configuration Guide
 
-This guide covers all configuration options for the Zen MCP Server. The server is configured through environment variables defined in your `.env` file.
+This guide covers all configuration options for the PAL MCP Server. The server is configured through environment variables defined in your `.env` file.
 
 ## Quick Start Configuration

@@ -116,7 +116,7 @@ The `allow_code_generation` capability enables models to generate complete, prod
 1. Only enable for models significantly more capable than your primary CLI to ensure high-quality generated code
 2. The capability triggers structured code output (`<GENERATED-CODE>` blocks) for substantial implementation requests
 3. Minor code changes still use inline code blocks regardless of this setting
-4. Generated code is saved to `zen_generated.code` in the user's working directory
+4. Generated code is saved to `pal_generated.code` in the user's working directory
 5. Your CLI receives instructions to review and apply the generated code systematically
 
 **Example Configuration:**

@@ -143,8 +143,8 @@ The `allow_code_generation` capability enables models to generate complete, prod
 
 **Typical Workflow:**
 1. You ask your AI agent to implement a complex new feature using `chat` with a higher-reasoning model such as **gpt-5-pro**
-2. GPT-5-Pro generates structured implementation and shares the complete implementation with Zen
-3. Zen saves the code to `zen_generated.code` and asks AI agent to implement the plan
+2. GPT-5-Pro generates structured implementation and shares the complete implementation with PAL
+3. PAL saves the code to `pal_generated.code` and asks AI agent to implement the plan
 4. AI agent continues from the previous context, reads the file, applies the implementation
 
 ### Thinking Mode Configuration

@@ -2,7 +2,7 @@
 
 ## **The Most Profound Feature: Context Revival After Reset**
 
-**This powerful feature cannot be highlighted enough**: The Zen MCP Server implements a simple continuation system that seemingly transcends Claude's context limitations.
+**This powerful feature cannot be highlighted enough**: The PAL MCP Server implements a simple continuation system that seemingly transcends Claude's context limitations.
 
 ## How Context Revival Works

@@ -38,7 +38,7 @@ The following video demonstrates `continuation` via a casual `continue with gemi
 
 * We ask Claude code to pick one, then `chat` with `gemini` to make a final decision
 * Gemini responds, confirming choice. We use `continuation` to ask another question using the same conversation thread
-* Gemini responds with explanation. We use continuation again, using `/zen:continue (MCP)` command the second time
+* Gemini responds with explanation. We use continuation again, using `/pal:continue (MCP)` command the second time
 
 <div style="center">

@@ -51,20 +51,20 @@ The following video demonstrates `continuation` via a casual `continue with gemi
 Here's how this works in practice with a modern AI/ML workflow:
 
 **Session 1 - Claude's Initial Context (before reset):**
-You: "Help me design a RAG system for our customer support chatbot. I want to integrate vector embeddings with real-time retrieval. Think deeply with zen using o3 after you've come up with a detailed plan."
+You: "Help me design a RAG system for our customer support chatbot. I want to integrate vector embeddings with real-time retrieval. Think deeply with pal using o3 after you've come up with a detailed plan."
 
 Claude: "I'll analyze your requirements and design a comprehensive RAG architecture..."
 → Uses [`thinkdeep`](../README.md#1-chat---general-development-chat--collaborative-thinking) to brainstorm the overall approach
-→ Zen creates a new thread: abc123-def456-ghi789
-→ Zen responds, Claude finalizes the plan and presents it to you
+→ PAL creates a new thread: abc123-def456-ghi789
+→ PAL responds, Claude finalizes the plan and presents it to you
 
 *[Claude's context gets reset/compacted after extensive analysis]*
 
 **Session 2 - After Context Reset:**
 You: "Continue our RAG system discussion with O3 - I want to focus on the real-time inference optimization we talked about"
 
-→ Claude re-uses the last continuation identifier it received and _only_ poses the new prompt (since Zen is supposed to know what was being talked about), saving the tokens that would otherwise be spent re-prompting Claude
-→ O3 receives the FULL conversation history from Zen
+→ Claude re-uses the last continuation identifier it received and _only_ poses the new prompt (since PAL is supposed to know what was being talked about), saving the tokens that would otherwise be spent re-prompting Claude
+→ O3 receives the FULL conversation history from PAL
 → O3 sees the complete context: "Claude was designing a RAG system, comparing vector databases, and analyzing embedding strategies for customer support..."
 → O3 continues: "Building on our previous vector database analysis, for real-time inference optimization, I recommend implementing semantic caching with embedding similarity thresholds..."
 → O3's response re-ignites Claude's understanding of the entire conversation

@@ -75,11 +75,11 @@ Claude: "Ah yes, excellent plan! Based on O3's optimization insights and our ear
 
 ## Why This Changes Everything
 
-**Before Zen MCP**: Claude's context resets meant losing entire conversation threads.
+**Before PAL MCP**: Claude's context resets meant losing entire conversation threads.
 Complex multi-step analyses were fragmented and had to restart from scratch. You would most likely need to re-prompt Claude or have it re-read a previously
-saved document / `CLAUDE.md` etc. - no need. Zen remembers.
+saved document / `CLAUDE.md` etc. - no need. PAL remembers.
 
-**With Zen MCP**: Claude can orchestrate multi-hour, multi-tool workflows where:
+**With PAL MCP**: Claude can orchestrate multi-hour, multi-tool workflows where:
 - **O3** handles logical analysis and debugging
 - **Gemini Pro** performs deep architectural reviews
 - **Flash** provides quick formatting and style checks

@@ -1,6 +1,6 @@
-# Contributing to Zen MCP Server
+# Contributing to PAL MCP Server
 
-Thank you for your interest in contributing to Zen MCP Server! This guide will help you understand our development process, coding standards, and how to submit high-quality contributions.
+Thank you for your interest in contributing to PAL MCP Server! This guide will help you understand our development process, coding standards, and how to submit high-quality contributions.
 
 ## Getting Started

@@ -245,4 +245,4 @@ Contributors are recognized in:
 - Release notes for significant contributions
 - Special mentions for exceptional work
 
-Thank you for contributing to Zen MCP Server! Your efforts help make this tool better for everyone.
+Thank you for contributing to PAL MCP Server! Your efforts help make this tool better for everyone.

@@ -1,6 +1,6 @@
|
||||
# Custom Models & API Setup
|
||||
|
||||
This guide covers setting up multiple AI model providers including OpenRouter, custom API endpoints, and local model servers. The Zen MCP server supports a unified configuration for all these providers through a single model registry.
|
||||
This guide covers setting up multiple AI model providers including OpenRouter, custom API endpoints, and local model servers. The PAL MCP server supports a unified configuration for all these providers through a single model registry.
|
||||
|
||||
## Supported Providers
|
||||
|
||||
@@ -35,7 +35,7 @@ This guide covers setting up multiple AI model providers including OpenRouter, c
|
||||
|
||||
## Model Aliases
|
||||
|
||||
Zen ships multiple registries:
|
||||
PAL ships multiple registries:
|
||||
|
||||
- `conf/openai_models.json` – native OpenAI catalogue (override with `OPENAI_MODELS_CONFIG_PATH`)
|
||||
- `conf/gemini_models.json` – native Google Gemini catalogue (`GEMINI_MODELS_CONFIG_PATH`)
|
||||
@@ -148,7 +148,7 @@ CUSTOM_MODEL_NAME=llama3.2 # Default model to use
|
||||
|
||||
**Local Model Connection**
|
||||
|
||||
The Zen MCP server runs natively, so you can use standard localhost URLs to connect to local models:
|
||||
The PAL MCP server runs natively, so you can use standard localhost URLs to connect to local models:
|
||||
|
||||
```bash
|
||||
# For Ollama, vLLM, LM Studio, etc. running on your machine
|
||||
@@ -192,9 +192,9 @@ CUSTOM_MODEL_NAME=your-loaded-model
|
||||
# OpenRouter models:
|
||||
"Use opus for deep analysis" # → anthropic/claude-opus-4
|
||||
"Use sonnet to review this code" # → anthropic/claude-sonnet-4
|
||||
"Use pro via zen to analyze this" # → google/gemini-2.5-pro
|
||||
"Use gpt4o via zen to analyze this" # → openai/gpt-4o
|
||||
"Use mistral via zen to optimize" # → mistral/mistral-large
|
||||
"Use pro via pal to analyze this" # → google/gemini-2.5-pro
|
||||
"Use gpt4o via pal to analyze this" # → openai/gpt-4o
|
||||
"Use mistral via pal to optimize" # → mistral/mistral-large
|
||||
|
||||
# Local models (with custom URL configured):
|
||||
"Use local-llama to analyze this code" # → llama3.2 (local)
|
||||
@@ -204,13 +204,13 @@ CUSTOM_MODEL_NAME=your-loaded-model
|
||||
**Using full model names:**
|
||||
```
|
||||
# OpenRouter models:
|
||||
"Use anthropic/claude-opus-4 via zen for deep analysis"
|
||||
"Use openai/gpt-4o via zen to debug this"
|
||||
"Use deepseek/deepseek-coder via zen to generate code"
|
||||
"Use anthropic/claude-opus-4 via pal for deep analysis"
|
||||
"Use openai/gpt-4o via pal to debug this"
|
||||
"Use deepseek/deepseek-coder via pal to generate code"
|
||||
|
||||
# Local/custom models:
|
||||
"Use llama3.2 via zen to review this"
|
||||
"Use meta-llama/Llama-2-7b-chat-hf via zen to analyze"
|
||||
"Use llama3.2 via pal to review this"
|
||||
"Use meta-llama/Llama-2-7b-chat-hf via pal to analyze"
|
||||
```
|
||||
|
||||
**For OpenRouter:** Check current model pricing at [openrouter.ai/models](https://openrouter.ai/models).
|
||||
|
||||
@@ -1,13 +1,13 @@
 # Docker Deployment Guide
 
-This guide covers deploying Zen MCP Server using Docker and Docker Compose for production environments.
+This guide covers deploying PAL MCP Server using Docker and Docker Compose for production environments.
 
 ## Quick Start
 
 1. **Clone the repository**:
    ```bash
-   git clone https://github.com/BeehiveInnovations/zen-mcp-server.git
-   cd zen-mcp-server
+   git clone https://github.com/BeehiveInnovations/pal-mcp-server.git
+   cd pal-mcp-server
    ```
 
 2. **Configure environment variables**:

@@ -113,7 +113,7 @@ The Dockerfile uses a multi-stage build for optimal image size:
 
 ### Security Features
 
-- **Non-root user**: Runs as `zenuser` (UID/GID 1000)
+- **Non-root user**: Runs as `paluser` (UID/GID 1000)
 - **Read-only filesystem**: Container filesystem is immutable
 - **No new privileges**: Prevents privilege escalation
 - **Secure tmpfs**: Temporary directories with strict permissions

@@ -151,10 +151,10 @@ docker-compose up
 docker-compose ps
 
 # Follow logs
-docker-compose logs -f zen-mcp
+docker-compose logs -f pal-mcp
 
 # View health status
-docker inspect zen-mcp-server --format='{{.State.Health.Status}}'
+docker inspect pal-mcp-server --format='{{.State.Health.Status}}'
 ```
 
 ### Stopping the Service

@@ -191,10 +191,10 @@ healthcheck:
 ### Volumes
 
 - **Logs**: `./logs:/app/logs` - Application logs
-- **Config**: `zen-mcp-config:/app/conf` - Configuration persistence
+- **Config**: `pal-mcp-config:/app/conf` - Configuration persistence
 - **Time sync**: `/etc/localtime:/etc/localtime:ro` - Host timezone sync
 
-**Note:** The `zen-mcp-config` is a named Docker volume that persists configuration data between container restarts. All data placed in `/app/conf` inside the container is preserved thanks to this persistent volume. This applies to both `docker-compose run` and `docker-compose up` commands.
+**Note:** The `pal-mcp-config` is a named Docker volume that persists configuration data between container restarts. All data placed in `/app/conf` inside the container is preserved thanks to this persistent volume. This applies to both `docker-compose run` and `docker-compose up` commands.
 
 ### Log Management

@@ -209,7 +209,7 @@ LOG_BACKUP_COUNT=5 # Number of backup files to keep
 
 ### Default Configuration
 
-- **Network**: `zen-network` (bridge)
+- **Network**: `pal-network` (bridge)
 - **Subnet**: `172.20.0.0/16`
 - **Isolation**: Container runs in isolated network

@@ -230,10 +230,10 @@ ports:
 **1. Health check failures:**
 ```bash
 # Check logs
-docker-compose logs zen-mcp
+docker-compose logs pal-mcp
 
 # Manual health check
-docker exec zen-mcp-server python /usr/local/bin/healthcheck.py
+docker exec pal-mcp-server python /usr/local/bin/healthcheck.py
 ```
 
 **2. Permission errors:**

@@ -252,7 +252,7 @@ cat .env
 **4. API key validation errors:**
 ```bash
 # Check environment variables in container
-docker exec zen-mcp-server env | grep -E "(GEMINI|OPENAI|XAI)"
+docker exec pal-mcp-server env | grep -E "(GEMINI|OPENAI|XAI)"
 ```
 
 ### Debug Mode

@@ -295,10 +295,10 @@ Consider integrating with monitoring solutions:
 Backup persistent volumes:
 ```bash
 # Backup configuration
-docker run --rm -v zen-mcp-config:/data -v $(pwd):/backup alpine tar czf /backup/config-backup.tar.gz -C /data .
+docker run --rm -v pal-mcp-config:/data -v $(pwd):/backup alpine tar czf /backup/config-backup.tar.gz -C /data .
 
 # Restore configuration
-docker run --rm -v zen-mcp-config:/data -v $(pwd):/backup alpine tar xzf /backup/config-backup.tar.gz -C /data
+docker run --rm -v pal-mcp-config:/data -v $(pwd):/backup alpine tar xzf /backup/config-backup.tar.gz -C /data
 ```
 
 ## Performance Tuning

@@ -319,7 +319,7 @@ deploy:
 
 Monitor memory usage:
 ```bash
-docker stats zen-mcp-server
+docker stats pal-mcp-server
 ```
 
 Adjust Python memory settings if needed:

@@ -339,17 +339,17 @@ Configure Claude Desktop to use the containerized server. **Choose one of the co
 ```json
 {
   "mcpServers": {
-    "zen-mcp": {
+    "pal-mcp": {
       "command": "docker",
       "args": [
         "run",
         "--rm",
         "-i",
         "--env-file",
-        "/absolute/path/to/zen-mcp-server/.env",
+        "/absolute/path/to/pal-mcp-server/.env",
         "-v",
-        "/absolute/path/to/zen-mcp-server/logs:/app/logs",
-        "zen-mcp-server:latest"
+        "/absolute/path/to/pal-mcp-server/logs:/app/logs",
+        "pal-mcp-server:latest"
       ]
     }
   }

@@ -360,17 +360,17 @@ Configure Claude Desktop to use the containerized server. **Choose one of the co
 ```json
 {
   "mcpServers": {
-    "zen-mcp": {
+    "pal-mcp": {
       "command": "docker",
       "args": [
         "run",
         "--rm",
         "-i",
         "--env-file",
-        "C:/path/to/zen-mcp-server/.env",
+        "C:/path/to/pal-mcp-server/.env",
         "-v",
-        "C:/path/to/zen-mcp-server/logs:/app/logs",
-        "zen-mcp-server:latest"
+        "C:/path/to/pal-mcp-server/logs:/app/logs",
+        "pal-mcp-server:latest"
       ]
     }
   }

@@ -384,11 +384,11 @@ Configure Claude Desktop to use the containerized server. **Choose one of the co
 ```json
 {
   "mcpServers": {
-    "zen-mcp": {
+    "pal-mcp": {
       "command": "docker-compose",
       "args": [
-        "-f", "/absolute/path/to/zen-mcp-server/docker-compose.yml",
-        "run", "--rm", "zen-mcp"
+        "-f", "/absolute/path/to/pal-mcp-server/docker-compose.yml",
+        "run", "--rm", "pal-mcp"
       ]
     }
   }

@@ -402,7 +402,7 @@ Configure Claude Desktop to use the containerized server. **Choose one of the co
 ```json
 {
   "mcpServers": {
-    "zen-mcp": {
+    "pal-mcp": {
       "command": "docker",
       "args": [
         "run",

@@ -412,7 +412,7 @@ Configure Claude Desktop to use the containerized server. **Choose one of the co
         "-e", "LOG_LEVEL=INFO",
         "-e", "DEFAULT_MODEL=auto",
         "-v", "/path/to/logs:/app/logs",
-        "zen-mcp-server:latest"
+        "pal-mcp-server:latest"
       ]
     }
   }

@@ -422,10 +422,10 @@ Configure Claude Desktop to use the containerized server. **Choose one of the co
 ### Configuration Notes
 
 **Important notes:**
-- Replace `/absolute/path/to/zen-mcp-server` with the actual path to your project.
+- Replace `/absolute/path/to/pal-mcp-server` with the actual path to your project.
 - Always use forward slashes `/` for Docker volumes, even on Windows.
 - Ensure the `.env` file exists and contains your API keys.
-- **Persistent volumes**: Docker Compose options (Option 2) automatically use the `zen-mcp-config` named volume for persistent configuration storage.
+- **Persistent volumes**: Docker Compose options (Option 2) automatically use the `pal-mcp-config` named volume for persistent configuration storage.
 
 **Environment file requirements:**
 ```env

@@ -436,7 +436,7 @@ OPENAI_API_KEY=your_openai_key
 ```
 
 **Troubleshooting:**
-- If Option 1 fails: check that the Docker image exists (`docker images zen-mcp-server`).
+- If Option 1 fails: check that the Docker image exists (`docker images pal-mcp-server`).
 - If Option 2 fails: verify the compose file path and ensure the service is not already in use.
 - Permission issues: make sure the `logs` folder is writable.

@@ -447,7 +447,7 @@ OPENAI_API_KEY=your_openai_key
 For complex deployments:
 ```yaml
 networks:
-  zen-network:
+  pal-network:
     driver: bridge
     ipam:
       config:

@@ -483,7 +483,7 @@ docker-compose build --no-cache
 
 ### Data Migration
 
-When upgrading, configuration is preserved in the named volume `zen-mcp-config`.
+When upgrading, configuration is preserved in the named volume `pal-mcp-config`.
 
 For major version upgrades, check the [CHANGELOG](../CHANGELOG.md) for breaking changes.

@@ -1,13 +1,13 @@
|
||||
# Gemini CLI Setup
|
||||
|
||||
> **Note**: While Zen MCP Server connects successfully to Gemini CLI, tool invocation is not working
|
||||
> **Note**: While PAL MCP Server connects successfully to Gemini CLI, tool invocation is not working
|
||||
> correctly yet. We'll update this guide once the integration is fully functional.
|
||||
|
||||
This guide explains how to configure Zen MCP Server to work with [Gemini CLI](https://github.com/google-gemini/gemini-cli).
|
||||
This guide explains how to configure PAL MCP Server to work with [Gemini CLI](https://github.com/google-gemini/gemini-cli).
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Zen MCP Server installed and configured
|
||||
- PAL MCP Server installed and configured
|
||||
- Gemini CLI installed
|
||||
- At least one API key configured in your `.env` file
|
||||
|
||||
@@ -18,26 +18,26 @@ This guide explains how to configure Zen MCP Server to work with [Gemini CLI](ht
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"zen": {
|
||||
"command": "/path/to/zen-mcp-server/zen-mcp-server"
|
||||
"pal": {
|
||||
"command": "/path/to/pal-mcp-server/pal-mcp-server"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
2. Replace `/path/to/zen-mcp-server` with your actual Zen installation path.
|
||||
2. Replace `/path/to/pal-mcp-server` with your actual PAL MCP installation path (the folder name may still be `pal-mcp-server`).
|
||||
|
||||
3. If the `zen-mcp-server` wrapper script doesn't exist, create it:
|
||||
3. If the `pal-mcp-server` wrapper script doesn't exist, create it:
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
DIR="$(cd "$(dirname "$0")" && pwd)"
|
||||
cd "$DIR"
|
||||
exec .zen_venv/bin/python server.py "$@"
|
||||
exec .pal_venv/bin/python server.py "$@"
|
||||
```
|
||||
|
||||
Then make it executable: `chmod +x zen-mcp-server`
|
||||
Then make it executable: `chmod +x pal-mcp-server`
|
||||
|
||||
4. Restart Gemini CLI.
|
||||
|
||||
All 15 Zen tools are now available in your Gemini CLI session.
|
||||
All 15 PAL tools are now available in your Gemini CLI session.
|
||||
|
||||
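The wrapper script above can be smoke-tested before wiring it into Gemini CLI. A minimal sketch, using a stub in place of the real `.pal_venv/bin/python` interpreter (the stub and the temporary directory are illustrative only, not part of the server):

```shell
set -eu
cd "$(mktemp -d)"
# Stub interpreter standing in for .pal_venv/bin/python: echoes its arguments.
mkdir -p .pal_venv/bin
printf '#!/bin/sh\necho "$@"\n' > .pal_venv/bin/python
chmod +x .pal_venv/bin/python
# The wrapper exactly as shown in the guide above.
cat > pal-mcp-server <<'EOF'
#!/bin/bash
DIR="$(cd "$(dirname "$0")" && pwd)"
cd "$DIR"
exec .pal_venv/bin/python server.py "$@"
EOF
chmod +x pal-mcp-server
./pal-mcp-server --help   # the stub echoes: server.py --help
```

If the echoed line appears, the wrapper resolves its own directory and execs the venv interpreter correctly; swapping the stub for the real virtualenv is all that remains.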
@@ -1,6 +1,6 @@
-# Getting Started with Zen MCP Server
+# Getting Started with PAL MCP Server

-This guide walks you through setting up the Zen MCP Server from scratch, including installation, configuration, and first usage.
+This guide walks you through setting up the PAL MCP Server from scratch, including installation, configuration, and first usage.

 ## Prerequisites

@@ -77,11 +77,11 @@ Choose your AI coding assistant and add the corresponding configuration:
 ```json
 {
   "mcpServers": {
-    "zen": {
+    "pal": {
       "command": "sh",
       "args": [
         "-c",
-        "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \"$p\" ] && exec \"$p\" --from git+https://github.com/BeehiveInnovations/zen-mcp-server.git zen-mcp-server; done; echo 'uvx not found' >&2; exit 1"
+        "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \"$p\" ] && exec \"$p\" --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server; done; echo 'uvx not found' >&2; exit 1"
       ],
       "env": {
         "PATH": "/usr/local/bin:/usr/bin:/bin:/opt/homebrew/bin:~/.local/bin",
@@ -98,11 +98,11 @@ Create `.mcp.json` in your project root:
 ```json
 {
   "mcpServers": {
-    "zen": {
+    "pal": {
       "command": "sh",
       "args": [
         "-c",
-        "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \"$p\" ] && exec \"$p\" --from git+https://github.com/BeehiveInnovations/zen-mcp-server.git zen-mcp-server; done; echo 'uvx not found' >&2; exit 1"
+        "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \"$p\" ] && exec \"$p\" --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server; done; echo 'uvx not found' >&2; exit 1"
       ],
       "env": {
         "PATH": "/usr/local/bin:/usr/bin:/bin:/opt/homebrew/bin:~/.local/bin",
@@ -119,11 +119,11 @@ Edit `~/.gemini/settings.json`:
 ```json
 {
   "mcpServers": {
-    "zen": {
+    "pal": {
       "command": "sh",
       "args": [
         "-c",
-        "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \"$p\" ] && exec \"$p\" --from git+https://github.com/BeehiveInnovations/zen-mcp-server.git zen-mcp-server; done; echo 'uvx not found' >&2; exit 1"
+        "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \"$p\" ] && exec \"$p\" --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server; done; echo 'uvx not found' >&2; exit 1"
       ],
       "env": {
         "PATH": "/usr/local/bin:/usr/bin:/bin:/opt/homebrew/bin:~/.local/bin",
@@ -138,17 +138,17 @@ Edit `~/.gemini/settings.json`:
 Edit `~/.codex/config.toml`:

 ```toml
-[mcp_servers.zen]
+[mcp_servers.pal]
 command = "bash"
-args = ["-c", "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \\\"$p\\\" ] && exec \\\"$p\\\" --from git+https://github.com/BeehiveInnovations/zen-mcp-server.git zen-mcp-server; done; echo 'uvx not found' >&2; exit 1"]
+args = ["-c", "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \\\"$p\\\" ] && exec \\\"$p\\\" --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server; done; echo 'uvx not found' >&2; exit 1"]
 tool_timeout_sec = 1200 # 20 minutes; added automatically by the setup script so upstream providers can respond

-[mcp_servers.zen.env]
+[mcp_servers.pal.env]
 PATH = "/usr/local/bin:/usr/bin:/bin:/opt/homebrew/bin:$HOME/.local/bin:$HOME/.cargo/bin:$HOME/bin"
 GEMINI_API_KEY = "your_api_key_here"
 ```

-Enable Codex's built-in web-search tool so Zen's `apilookup` instructions can execute successfully:
+Enable Codex's built-in web-search tool so PAL's `apilookup` instructions can execute successfully:

 ```toml
 [tools]
@@ -164,13 +164,13 @@ Create or edit `~/.qwen/settings.json`:
 ```json
 {
   "mcpServers": {
-    "zen": {
+    "pal": {
       "command": "bash",
       "args": [
         "-c",
-        "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \"$p\" ] && exec \"$p\" --from git+https://github.com/BeehiveInnovations/zen-mcp-server.git zen-mcp-server; done; echo 'uvx not found' >&2; exit 1"
+        "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \"$p\" ] && exec \"$p\" --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server; done; echo 'uvx not found' >&2; exit 1"
       ],
-      "cwd": "/path/to/zen-mcp-server",
+      "cwd": "/path/to/pal-mcp-server",
       "env": {
         "PATH": "/usr/local/bin:/usr/bin:/bin:/opt/homebrew/bin:~/.local/bin",
         "GEMINI_API_KEY": "your_api_key_here"
@@ -189,13 +189,13 @@ Edit `~/.config/opencode/opencode.json`:
 {
   "$schema": "https://opencode.ai/config.json",
   "mcp": {
-    "zen": {
+    "pal": {
       "type": "local",
       "command": [
-        "/path/to/zen-mcp-server/.zen_venv/bin/python",
-        "/path/to/zen-mcp-server/server.py"
+        "/path/to/pal-mcp-server/.pal_venv/bin/python",
+        "/path/to/pal-mcp-server/server.py"
       ],
-      "cwd": "/path/to/zen-mcp-server",
+      "cwd": "/path/to/pal-mcp-server",
       "enabled": true,
       "environment": {
         "GEMINI_API_KEY": "your_api_key_here"
@@ -209,14 +209,14 @@ Add any other API keys you rely on (`OPENAI_API_KEY`, `OPENROUTER_API_KEY`, etc.

 #### IDE Clients (Cursor & VS Code)

-Zen works in GUI IDEs that speak MCP. The configuration mirrors the CLI examples above—point the client at the `uvx` launcher and set any required environment variables.
+PAL works in GUI IDEs that speak MCP. The configuration mirrors the CLI examples above—point the client at the `uvx` launcher and set any required environment variables.

 **Cursor IDE**

 1. Open Cursor → `Settings` (`Cmd+,`/`Ctrl+,`) → **Integrations › Model Context Protocol (MCP)**.
 2. Click **Add MCP Server** and supply the following values:
    - Command: `sh`
-   - Args: `-c` and `for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x "$p" ] && exec "$p" --from git+https://github.com/BeehiveInnovations/zen-mcp-server.git zen-mcp-server; done; echo 'uvx not found' >&2; exit 1`
+   - Args: `-c` and `for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x "$p" ] && exec "$p" --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server; done; echo 'uvx not found' >&2; exit 1`
    - Environment (example):
      - `PATH=/usr/local/bin:/usr/bin:/bin:/opt/homebrew/bin:~/.local/bin`
      - `GEMINI_API_KEY=your_api_key_here`
@@ -232,7 +232,7 @@ Zen works in GUI IDEs that speak MCP. The configuration mirrors the CLI examples
    - Environment: add the API keys you need (e.g. `GEMINI_API_KEY`, `OPENAI_API_KEY`)
 4. Save the JSON snippet the extension generates. VS Code will reload the server automatically the next time you interact with Claude.

-👉 Pro tip: If you prefer a one-line command, replace the long loop with `uvx --from git+https://github.com/BeehiveInnovations/zen-mcp-server.git zen-mcp-server`—just make sure `uvx` is on your PATH for every client.
+👉 Pro tip: If you prefer a one-line command, replace the long loop with `uvx --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server`—just make sure `uvx` is on your PATH for every client.

 **Benefits of uvx method:**
 - ✅ Zero manual setup required
@@ -244,8 +244,8 @@ Zen works in GUI IDEs that speak MCP. The configuration mirrors the CLI examples

 ```bash
 # Clone the repository
-git clone https://github.com/BeehiveInnovations/zen-mcp-server.git
-cd zen-mcp-server
+git clone https://github.com/BeehiveInnovations/pal-mcp-server.git
+cd pal-mcp-server

 # One-command setup (handles everything)
 ./run-server.sh
@@ -305,7 +305,7 @@ CUSTOM_MODEL_NAME=llama3.2 # Default model name

 ## Prevent Client Timeouts

-Some MCP clients default to short timeouts and can disconnect from Zen during long tool runs. Configure each client with a generous ceiling (we recommend at least five minutes); the Zen setup script now writes a 20-minute tool timeout for Codex so upstream providers contacted by the server have time to respond.
+Some MCP clients default to short timeouts and can disconnect from PAL during long tool runs. Configure each client with a generous ceiling (we recommend at least five minutes); the PAL setup script now writes a 20-minute tool timeout for Codex so upstream providers contacted by the server have time to respond.

 ### Claude Code & Claude Desktop

@@ -320,14 +320,14 @@ Claude reads MCP-related environment variables either from your shell or from `~
 }
 ```

-You can scope this block at the top level of `settings.json` (applies to every session) or under a specific `mcpServers.<name>.env` entry if you only want it for Zen. The values are in milliseconds. Note: Claude’s SSE transport still enforces an internal ceiling of roughly five minutes; long-running HTTP/SSE servers may need retries until Anthropic ships their fix.
+You can scope this block at the top level of `settings.json` (applies to every session) or under a specific `mcpServers.<name>.env` entry if you only want it for PAL (the server name may still be `zen` in older configurations). The values are in milliseconds. Note: Claude’s SSE transport still enforces an internal ceiling of roughly five minutes; long-running HTTP/SSE servers may need retries until Anthropic ships their fix.
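For reference, the scoped block described above takes roughly this shape (a minimal sketch, assuming Claude's `MCP_TIMEOUT`/`MCP_TOOL_TIMEOUT` environment variables; the values are illustrative, not recommendations):

```json
{
  "env": {
    "MCP_TIMEOUT": "300000",
    "MCP_TOOL_TIMEOUT": "1200000"
  }
}
```

Both values are in milliseconds, matching the note above; nest the same `env` object under `mcpServers.<name>.env` instead if you only want it applied to one server.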

 ### Codex CLI

 Codex exposes per-server timeouts in `~/.codex/config.toml`. Add (or bump) these keys under `[[mcp_servers.<name>]]`:

 ```toml
-[mcp_servers.zen]
+[mcp_servers.pal]
 command = "..."
 args = ["..."]
 startup_timeout_sec = 300 # default is 10 seconds
@@ -343,9 +343,9 @@ Gemini uses a single `timeout` field per server inside `~/.gemini/settings.json`
 ```json
 {
   "mcpServers": {
-    "zen": {
+    "pal": {
       "command": "uvx",
-      "args": ["zen-mcp-server"],
+      "args": ["pal-mcp-server"],
       "timeout": 300000
     }
   }
@@ -364,62 +364,62 @@ Versions 0.2.1 and newer currently ignore values above ~60 seconds for some tran
 ### For Claude Desktop:
 1. Restart Claude Desktop
 2. Open a new conversation
-3. Try: `"Use zen to list available models"`
+3. Try: `"Use pal to list available models"`

 ### For Claude Code CLI:
 1. Exit any existing Claude session
 2. Run `claude` from your project directory
-3. Try: `"Use zen to chat about Python best practices"`
+3. Try: `"Use pal to chat about Python best practices"`

 ### For Gemini CLI:
-**Note**: While Zen MCP connects to Gemini CLI, tool invocation isn't working correctly yet. See [Gemini CLI Setup](gemini-setup.md) for updates.
+**Note**: While PAL MCP connects to Gemini CLI, tool invocation isn't working correctly yet. See [Gemini CLI Setup](gemini-setup.md) for updates.

 ### For Qwen Code CLI:
 1. Restart the Qwen Code CLI if it's running (`qwen exit`).
-2. Run `qwen mcp list --scope user` and confirm `zen` shows `CONNECTED`.
-3. Try: `"/mcp"` to inspect available tools or `"Use zen to analyze this repo"`.
+2. Run `qwen mcp list --scope user` and confirm `pal` shows `CONNECTED`.
+3. Try: `"/mcp"` to inspect available tools or `"Use pal to analyze this repo"`.

 ### For OpenCode CLI:
 1. Restart OpenCode (or run `OpenCode: Reload Config`).
-2. Open **Settings › Tools › MCP** and confirm `zen` is enabled.
-3. Start a new chat and try: `"Use zen to list available models"`.
+2. Open **Settings › Tools › MCP** and confirm `pal` is enabled.
+3. Start a new chat and try: `"Use pal to list available models"`.

 ### For Codex CLI:
 1. Restart Codex CLI if running
 2. Open a new conversation
-3. Try: `"Use zen to list available models"`
+3. Try: `"Use pal to list available models"`

 ### Test Commands:
 ```
-"Use zen to list available models"
-"Chat with zen about the best approach for API design"
-"Use zen thinkdeep with gemini pro about scaling strategies"
+"Use pal to list available models"
+"Chat with pal about the best approach for API design"
+"Use pal thinkdeep with gemini pro about scaling strategies"
 "Debug this error with o3: [paste error]"
 ```

 **Note**: Codex CLI provides excellent MCP integration with automatic environment variable configuration when using the setup script.

-## Step 5: Start Using Zen
+## Step 5: Start Using PAL

 ### Basic Usage Patterns:

 **Let Claude pick the model:**
 ```
-"Use zen to analyze this code for security issues"
-"Debug this race condition with zen"
-"Plan the database migration with zen"
+"Use pal to analyze this code for security issues"
+"Debug this race condition with pal"
+"Plan the database migration with pal"
 ```

 **Specify the model:**
 ```
-"Use zen with gemini pro to review this complex algorithm"
-"Debug with o3 using zen for logical analysis"
-"Get flash to quickly format this code via zen"
+"Use pal with gemini pro to review this complex algorithm"
+"Debug with o3 using pal for logical analysis"
+"Get flash to quickly format this code via pal"
 ```

 **Multi-model workflows:**
 ```
-"Use zen to get consensus from pro and o3 on this architecture"
+"Use pal to get consensus from pro and o3 on this architecture"
 "Code review with gemini, then precommit validation with o3"
 "Analyze with flash, then deep dive with pro if issues found"
 ```
@@ -435,7 +435,7 @@ Versions 0.2.1 and newer currently ignore values above ~60 seconds for some tran

 ## Common Issues and Solutions

-### "zen not found" or "command not found"
+### "pal not found" or "command not found"

 **For uvx installations:**
 - Ensure `uv` is installed and in PATH
@@ -444,7 +444,7 @@ Versions 0.2.1 and newer currently ignore values above ~60 seconds for some tran

 **For clone installations:**
 - Run `./run-server.sh` again to verify setup
-- Check virtual environment: `which python` should show `.zen_venv/bin/python`
+- Check virtual environment: `which python` should show `.pal_venv/bin/python`

 ### API Key Issues

@@ -454,7 +454,7 @@ Versions 0.2.1 and newer currently ignore values above ~60 seconds for some tran
 - Check for extra spaces or quotes around keys

 **"Model not available":**
-- Run `"Use zen to list available models"` to see what's configured
+- Run `"Use pal to list available models"` to see what's configured
 - Check model restrictions in environment variables
 - Verify API key has access to requested models

@@ -1,4 +1,6 @@
-# Zen MCP Server Documentation
+# PAL MCP Server Documentation

+_Formerly known as Zen MCP. See the short [name change note](name-change.md) for context._
+
 | Document | Description |
 |----------|-------------|

@@ -1,4 +1,4 @@
-# Locale Configuration for Zen MCP Server
+# Locale Configuration for PAL MCP Server

 This guide explains how to configure and use the localization feature to customize the language of responses from MCP tools.

docs/name-change.md (new file, 5 lines)
@@ -0,0 +1,5 @@
+# PAL MCP Name Change
+
+PAL MCP was previously called Zen MCP. We renamed to avoid confusion with another similarly named product and to better reflect our role as a Provider Abstraction Layer for MCP. The software and workflows are the same; some configuration keys may still use `zen` during the transition, which we plan to migrate away from in subsequent updates.
+
+Due to the name change, you may need to run `run-server.sh` again to set up the new connection, and revisit any `ZEN` names used within `.env` and change them to `PAL`.
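The `.env` rename mentioned above can be scripted. A minimal sketch (the `ZEN_`-prefixed key below is a hypothetical example; check your own file for the keys you actually set):

```shell
set -eu
cd "$(mktemp -d)"
# Sample .env standing in for a real one (key names are illustrative).
printf 'ZEN_MCP_FORCE_ENV_OVERRIDE=true\nGEMINI_API_KEY=abc123\n' > .env
cp .env .env.bak                  # keep a backup before rewriting
sed -i.orig 's/^ZEN_/PAL_/' .env  # rename ZEN_-prefixed keys to PAL_
grep '^PAL_' .env                 # confirm the renamed key is present
```

Only keys beginning with `ZEN_` are touched; everything else (such as API keys) is left as-is.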
||||
@@ -5,7 +5,7 @@ Most importantly, it does this within in a sub-process / sub-agent, saving you p
|
||||
|
||||
## Why Use This Tool?
|
||||
|
||||
### Without Zen (Using Standard AI)
|
||||
### Without PAL (Using Standard AI)
|
||||
```
|
||||
User: "How do I add glass look to a button in Swift?"
|
||||
|
||||
@@ -17,11 +17,11 @@ Result: You get outdated APIs for iOS 18, not the iOS 26 effect you're after
|
||||
|
||||
<div align="center">
|
||||
|
||||
[API without Zen](https://github.com/user-attachments/assets/01a79dc9-ad16-4264-9ce1-76a56c3580ee)
|
||||
[API without PAL](https://github.com/user-attachments/assets/01a79dc9-ad16-4264-9ce1-76a56c3580ee)
|
||||
|
||||
</div>
|
||||
|
||||
### With Zen (Using apilookup)
|
||||
### With PAL (Using apilookup)
|
||||
```
|
||||
User: "use apilookup how do I add glass look to a button in swift?"
|
||||
|
||||
@@ -36,7 +36,7 @@ Result: You get the correct, current APIs that work with today's iOS version
|
||||
|
||||
<div align="center">
|
||||
|
||||
[API with Zen](https://github.com/user-attachments/assets/5c847326-4b66-41f7-8f30-f380453dce22)
|
||||
[API with PAL](https://github.com/user-attachments/assets/5c847326-4b66-41f7-8f30-f380453dce22)
|
||||
|
||||
</div>
|
||||
|
||||
@@ -110,7 +110,7 @@ The AI then performs the actual web searches and synthesizes the results into ac
|
||||
|
||||
## Codex CLI Configuration Reminder
|
||||
|
||||
If you use Zen through the Codex CLI, the assistant needs Codex's native web-search tool to fetch current documentation. After adding the Zen MCP entry to `~/.codex/config.toml`, confirm the file also contains:
|
||||
If you use PAL through the Codex CLI, the assistant needs Codex's native web-search tool to fetch current documentation. After adding the PAL MCP entry to `~/.codex/config.toml`, confirm the file also contains:
|
||||
|
||||
```toml
|
||||
[tools]
|
||||
|
||||
@@ -20,11 +20,11 @@ bigger picture or ignoring architectural constraints.
|
||||
`challenge` fixes this. Claude can even _detect_ when you're challenging something and automatically invokes this tool
|
||||
to ensure thoughtful analysis instead of reflexive agreement.
|
||||
|
||||
**Without Zen:**
|
||||

|
||||
**Without PAL:**
|
||||

|
||||
|
||||
**With Zen:**
|
||||

|
||||
**With PAL:**
|
||||

|
||||
|
||||
## Why Use Challenge?
|
||||
|
||||
@@ -32,4 +32,4 @@ AI assistants sometimes tend to agree too readily. The challenge tool helps you:
|
||||
- Get genuine critical evaluation of your ideas
|
||||
- Challenge assumptions constructively
|
||||
- Receive honest feedback on proposals
|
||||
- Validate approaches with thoughtful analysis
|
||||
- Validate approaches with thoughtful analysis
|
||||
|
||||
@@ -26,7 +26,7 @@ word verdict in the end.
|
||||
|
||||
* We ask Claude code to pick one of two frameworks, then `chat` with `gemini` to make a final decision
|
||||
* Gemini responds, confirming choice. We use `continuation` to ask another question using the same conversation thread
|
||||
* Gemini responds with explanation. We use continuation again, using `/zen:continue (MCP)` command the second time
|
||||
* Gemini responds with explanation. We use continuation again, using `/pal:continue (MCP)` command the second time
|
||||
|
||||
<div style="center">
|
||||
|
||||
@@ -39,7 +39,7 @@ word verdict in the end.
|
||||
- **Collaborative thinking partner** for your analysis and planning
|
||||
- **Get second opinions** on your designs and approaches
|
||||
- **Brainstorm solutions** and explore alternatives together
|
||||
- **Structured code generation**: When using GPT-5.1 or Gemini 3.0 / 2.5 Pro, get complete, production-ready implementations saved to `zen_generated.code` for your CLI to review and apply
|
||||
- **Structured code generation**: When using GPT-5.1 or Gemini 3.0 / 2.5 Pro, get complete, production-ready implementations saved to `pal_generated.code` for your CLI to review and apply
|
||||
- **Validate your checklists** and implementation plans
|
||||
- **General development questions** and explanations
|
||||
- **Technology comparisons** and best practices
|
||||
@@ -67,8 +67,8 @@ When using advanced reasoning models like **GPT-5 Pro** or **Gemini 3.0 Pro**, t
|
||||
### How It Works
|
||||
|
||||
1. You ask your AI agent to implement a complex new feature using `chat` with a higher-reasoning model such as **GPT-5 Pro** or **Gemini 3.0 Pro**
|
||||
2. The model generates structured implementation and shares the complete implementation with Zen
|
||||
3. Zen saves the code to `zen_generated.code` and asks AI agent to implement the plan
|
||||
2. The model generates structured implementation and shares the complete implementation with PAL
|
||||
3. PAL saves the code to `pal_generated.code` and asks AI agent to implement the plan
|
||||
4. AI agent continues from the previous context, reads the file, applies the implementation
|
||||
|
||||
### When Code Generation Activates
|
||||
@@ -101,7 +101,7 @@ See the [Configuration Guide](../configuration.md#code-generation-capability) fo
|
||||
|
||||
**Basic Development Chat:**
|
||||
```
|
||||
"Chat with zen about the best approach for user authentication in my React app"
|
||||
"Chat with pal about the best approach for user authentication in my React app"
|
||||
```
|
||||
|
||||
**Technology Comparison:**
|
||||
|
||||
@@ -53,7 +53,7 @@ Gemini receives the full conversation context from `consensus` including the con
|
||||
- **Role-based prompts**: Pre-configured roles for planning, code review, or general questions
|
||||
- **Full CLI capabilities**: Gemini can use its own web search, file tools, and latest features
|
||||
- **Token efficiency**: File references (not full content) to conserve tokens
|
||||
- **Cross-tool collaboration**: Combine with other Zen tools like `planner` → `clink` → `codereview`
|
||||
- **Cross-tool collaboration**: Combine with other PAL tools like `planner` → `clink` → `codereview`
|
||||
- **Free tier available**: Gemini offers 1,000 requests/day free with a personal Google account - great for cost savings across tools
|
||||
|
||||
## Available Roles
|
||||
@@ -120,18 +120,18 @@ then codereview to verify the implementation"
|
||||
## How Clink Works
|
||||
|
||||
1. **Your request** - You ask your current CLI to use `clink` with a specific CLI and role
|
||||
2. **Background execution** - Zen spawns the configured CLI (e.g., `gemini --output-format json`)
|
||||
2. **Background execution** - PAL spawns the configured CLI (e.g., `gemini --output-format json`)
|
||||
3. **Context forwarding** - Your prompt, files (as references), and conversation history are sent as part of the prompt
|
||||
4. **CLI processing** - Gemini (or other CLI) uses its own tools: web search, file access, thinking modes
|
||||
5. **Seamless return** - Results flow back into your conversation with full context preserved
|
||||
6. **Continuation support** - Future tools and models can reference Gemini's findings via [continuation support](../context-revival.md) within Zen.
|
||||
6. **Continuation support** - Future tools and models can reference Gemini's findings via [continuation support](../context-revival.md) within PAL.
|
||||
|
||||
## Best Practices
|
||||
|
||||
- **Pre-authenticate CLIs**: Install and configure Gemini CLI first (`npm install -g @google/gemini-cli`)
|
||||
- **Choose appropriate roles**: Use `planner` for strategy, `codereviewer` for code, `default` for general questions
|
||||
- **Leverage CLI strengths**: Gemini's 1M context for large codebases, web search for current docs
|
||||
- **Combine with Zen tools**: Chain `clink` with `planner`, `codereview`, `debug` for powerful workflows
|
||||
- **Combine with PAL tools**: Chain `clink` with `planner`, `codereview`, `debug` for powerful workflows
|
||||
- **File efficiency**: Pass file paths, let the CLI decide what to read (saves tokens)
|
||||
|
||||
## Configuration
|
||||
@@ -153,9 +153,9 @@ Each preset points to role-specific prompts in `systemprompts/clink/`. Duplicate
|
||||
## When to Use Clink vs Other Tools
|
||||
|
||||
- **Use `clink`** for: Leveraging external CLI capabilities (Gemini's web search, 1M context), specialized CLI features, cross-CLI collaboration
|
||||
- **Use `chat`** for: Direct model-to-model conversations within Zen
|
||||
- **Use `planner`** for: Zen's native planning workflows with step validation
|
||||
- **Use `codereview`** for: Zen's structured code review with severity levels
|
||||
- **Use `chat`** for: Direct model-to-model conversations within PAL
|
||||
- **Use `planner`** for: PAL's native planning workflows with step validation
|
||||
- **Use `codereview`** for: PAL's structured code review with severity levels
|
||||
|
||||
## Setup Requirements
|
||||
|
||||
@@ -168,7 +168,7 @@ Ensure the relevant CLI is installed and configured:
|
||||
## Related Guides
|
||||
|
||||
- [Chat Tool](chat.md) - Direct model conversations
|
||||
- [Planner Tool](planner.md) - Zen's native planning workflows
|
||||
- [Planner Tool](planner.md) - PAL's native planning workflows
|
||||
- [CodeReview Tool](codereview.md) - Structured code reviews
|
||||
- [Context Revival](../context-revival.md) - Continuing conversations across tools
|
||||
- [Advanced Usage](../advanced-usage.md) - Complex multi-tool workflows
|
||||
|
||||
@@ -27,7 +27,7 @@ The following is a hypothetical example designed to demonstrate how one consensu
|
||||
|
||||
<div style="center">
|
||||
|
||||
[Zen Consensus Debate](https://github.com/user-attachments/assets/76a23dd5-887a-4382-9cf0-642f5cf6219e)
|
||||
[PAL Consensus Debate](https://github.com/user-attachments/assets/76a23dd5-887a-4382-9cf0-642f5cf6219e)
|
||||
|
||||
</div>
|
||||
|
||||
@@ -35,7 +35,7 @@ The following is a hypothetical example designed to demonstrate how one consensu
|
||||
|
||||
**For/Against Analysis:**
|
||||
```
|
||||
Use zen consensus with flash taking a supportive stance and pro being critical to evaluate whether
|
||||
Use pal consensus with flash taking a supportive stance and pro being critical to evaluate whether
|
||||
we should migrate from REST to GraphQL for our API
|
||||
```
|
||||
|
||||
|
||||
@@ -32,7 +32,7 @@ Documentation generation excels with analytical models like Gemini Pro or O3, wh
|
||||
|
||||
**Basic Usage:**
|
||||
```
|
||||
"Use zen to generate documentation for the UserManager class"
|
||||
"Use pal to generate documentation for the UserManager class"
|
||||
"Document the authentication module with complexity analysis using gemini pro"
|
||||
"Add comprehensive documentation to all methods in src/payment_processor.py"
|
||||
```
|
||||
|
||||
@@ -7,7 +7,7 @@ The `listmodels` tool shows which providers are configured, available models, th
|
||||
## Usage
|
||||
|
||||
```
|
||||
"Use zen to list available models"
|
||||
"Use pal to list available models"
|
||||
```
|
||||
|
||||
## Key Features
|
||||
|
||||
@@ -19,7 +19,7 @@ The planner tool enables step-by-step thinking with incremental plan building:
|
||||
|
||||
#### Pro Tip
|
||||
Claude supports `sub-tasks` where it will spawn and run separate background tasks. You can ask Claude to
|
||||
run Zen's planner with two separate ideas. Then when it's done, use Zen's `consensus` tool to pass the entire
|
||||
run PAL's planner with two separate ideas. Then when it's done, use PAL's `consensus` tool to pass the entire
|
||||
plan and get expert perspective from two powerful AI models on which one to work on first! Like performing **AB** testing
|
||||
in one-go without the wait!
|
||||
|
||||
@@ -31,7 +31,7 @@ I implement first?
```

```
Use zen's planner and show me how to add real-time notifications to our mobile app
Use pal's planner and show me how to add real-time notifications to our mobile app
```

```
@@ -55,11 +55,11 @@ Using planner, plan the architecture for a new real-time chat system with 100k c
```

```
Create a plan using zen for migrating our React app from JavaScript to TypeScript
Create a plan using pal for migrating our React app from JavaScript to TypeScript
```

```
Develop a plan using zen for implementing CI/CD pipelines across our development teams
Develop a plan using pal for implementing CI/CD pipelines across our development teams
```

## Best Practices
@@ -73,11 +73,11 @@ Develop a plan using zen for implementing CI/CD pipelines across our development

## Continue With a New Plan

Like all other tools in Zen, you can `continue` with a new plan using the output from a previous plan by simply saying
Like all other tools in PAL, you can `continue` with a new plan using the output from a previous plan by simply saying

```
Continue with zen's consensus tool and find out what o3:for and flash:against think of the plan
Continue with pal's consensus tool and find out what o3:for and flash:against think of the plan
```

You can mix and match and take one output and feed it into another, continuing from where you left off using a different
tool / model combination.
tool / model combination.
@@ -32,7 +32,7 @@ After Claude completes the investigation (unless precommit_type is **internal**)
- Default validation type is **external** (uses expert model for additional review)
- To skip expert validation, explicitly request "don't use any other model" or set precommit_type to "internal"
- **CRITICAL**: Minimum 3 steps are enforced - tool will prevent setting `next_step_required=false` before final step
- **MANDATORY**: Changeset file (zen_precommit.changeset) must be created for external validation
- **MANDATORY**: Changeset file (pal_precommit.changeset) must be created for external validation

## Model Recommendation
@@ -62,7 +62,7 @@ passing, it went ahead with an initial review with O3; came back clean - all goo
Then it shared the changes and relevant code with Gemini 3.0 Pro - the following is the outcome:

```
⏺ zen:precommit (MCP)(step: "FINAL VALIDATION COMPLETE. All changes have been thoroughly analyzed and are ready for
⏺ pal:precommit (MCP)(step: "FINAL VALIDATION COMPLETE. All changes have been thoroughly analyzed and are ready for
commit. The modifications successfully fix 4 GitHub CI test failures with high-quality, defensive
implementations that maintain architectural integrity and coding standards. No issues or regressions
detected.", step_number: 4, total_steps: 4, next_step_required: false, findings: "COMMIT
@@ -101,12 +101,12 @@ Then it shared the changes and relevant code with Gemini 3.0 Pro - the following
- Fix Required: Apply the same defensive pattern to the missed location
```

A subtle but critical issue spotted by Gemini that both Claude + O3 missed. This is the power of running these workflows with Zen.
A subtle but critical issue spotted by Gemini that both Claude + O3 missed. This is the power of running these workflows with PAL.

## Example Prompts

```
Use zen and perform a thorough precommit ensuring there aren't any new regressions or bugs introduced
Use pal and perform a thorough precommit ensuring there aren't any new regressions or bugs introduced
```

## Key Features
@@ -155,7 +155,7 @@ Use zen and perform a thorough precommit ensuring there aren't any new regressio

**Basic Pre-commit Validation:**
```
"Use zen precommit to validate my changes before committing"
"Use pal precommit to validate my changes before committing"
```

**Security-Focused Validation:**
@@ -35,13 +35,13 @@ The refactor tool excels with models that have large context windows like Gemini

```
"Use gemini pro to decompose my_crazy_big_class.m into smaller extensions"
"Using zen's refactor decompose the all_in_one_sync_code.swift into maintainable extensions"
"Using pal's refactor decompose the all_in_one_sync_code.swift into maintainable extensions"
```

💡**Example of a powerful prompt** to get the best out of both Claude + Flash's 1M Context:
```
"First, think about how the authentication module works, find related classes and find
any code smells, then using zen's refactor ask flash to confirm your findings but ask
any code smells, then using pal's refactor ask flash to confirm your findings but ask
it to find additional code smells and any other quick-wins and then fix these issues"
```
@@ -35,7 +35,7 @@ Test generation excels with extended reasoning models like Gemini Pro or O3, whi

**Basic Usage:**
```
"Use zen to generate tests for User.login() method"
"Use pal to generate tests for User.login() method"
"Generate comprehensive tests for the sorting method in src/new_sort.py using o3"
"Create tests for edge cases not already covered in our tests using gemini pro"
```
@@ -36,13 +36,13 @@ The `tracer` tool is a specialized prompt-generation tool that creates structure

**Method Execution Tracing:**
```
"Use zen tracer to analyze how UserAuthManager.authenticate is used and why"
"Use pal tracer to analyze how UserAuthManager.authenticate is used and why"
```
→ Uses `precision` mode to trace the method's execution flow

**Class Dependency Mapping:**
```
"Use zen to generate a dependency trace for the PaymentProcessor class to understand its relationships"
"Use pal to generate a dependency trace for the PaymentProcessor class to understand its relationships"
```
→ Uses `dependencies` mode to map structural relationships
@@ -2,12 +2,12 @@

**Get server version, configuration details, and list of available tools**

The `version` tool provides information about the Zen MCP Server version, configuration details, and system capabilities. This is useful for debugging, understanding server capabilities, and verifying your installation.
The `version` tool provides information about the PAL MCP Server version, configuration details, and system capabilities. This is useful for debugging, understanding server capabilities, and verifying your installation.

## Usage

```
"Get zen to show its version"
"Get pal to show its version"
```

## Key Features
@@ -48,7 +48,7 @@ The tool provides:
## Example Output

```
🔧 Zen MCP Server Information
🔧 PAL MCP Server Information

📋 Version: 2.15.0
🏗️ Build: 2024-01-15T10:30:00Z (commit: abc123f)
@@ -2,12 +2,12 @@

## Quick Debugging Steps

If you're experiencing issues with the Zen MCP Server, follow these steps:
If you're experiencing issues with the PAL MCP Server, follow these steps:

### 1. Check MCP Connection

Open Claude Desktop and type `/mcp` to see if zen is connected:
- ✅ If zen appears in the list, the connection is working
Open Claude Desktop and type `/mcp` to see if pal is connected:
- ✅ If pal appears in the list, the connection is working
- ❌ If not listed or shows an error, continue to step 2

### 2. Launch Claude with Debug Mode
@@ -79,14 +79,14 @@ See [Logging Documentation](logging.md) for more details on accessing logs.

**Python module not found**
- Run `./run-server.sh` to reinstall dependencies
- Check virtual environment is activated: should see `.zen_venv` in the Python path
- Check virtual environment is activated: should see `.pal_venv` in the Python path

### 6. Environment Issues

**Virtual Environment Problems**
```bash
# Reset environment completely
rm -rf .zen_venv
rm -rf .pal_venv
./run-server.sh
```
@@ -1,6 +1,6 @@
# WSL (Windows Subsystem for Linux) Setup Guide

This guide provides detailed instructions for setting up Zen MCP Server on Windows using WSL.
This guide provides detailed instructions for setting up PAL MCP Server on Windows using WSL.

## Prerequisites for WSL
@@ -27,8 +27,8 @@ npm install -g @anthropic-ai/claude-code
cd ~

# Clone the repository
git clone https://github.com/BeehiveInnovations/zen-mcp-server.git
cd zen-mcp-server
git clone https://github.com/BeehiveInnovations/pal-mcp-server.git
cd pal-mcp-server
```

2. **Run the setup script**:
@@ -43,7 +43,7 @@ npm install -g @anthropic-ai/claude-code
# List configured MCP servers
claude mcp list

# You should see 'zen' listed in the output
# You should see 'pal' listed in the output
# If not, the setup script will provide the correct configuration
```
@@ -61,19 +61,19 @@ python3 -m pip install --upgrade pip

### Path Issues

- Always use the full WSL path for MCP configuration (e.g., `/home/YourName/zen-mcp-server/`)
- Always use the full WSL path for MCP configuration (e.g., `/home/YourName/pal-mcp-server/`)
- The setup script automatically detects WSL and configures the correct paths

### Claude Code Connection Issues

```bash
# If Claude Code can't connect to the MCP server, check the configuration
cat ~/.claude.json | grep -A 10 "zen"
cat ~/.claude.json | grep -A 10 "pal"

# The configuration should show the correct WSL path to the Python executable
# Example: "/home/YourName/zen-mcp-server/.zen_venv/bin/python"
# Example: "/home/YourName/pal-mcp-server/.pal_venv/bin/python"
```

### Performance Tip

For best performance, keep your zen-mcp-server directory in the WSL filesystem (e.g., `~/zen-mcp-server`) rather than in the Windows filesystem (`/mnt/c/...`).
For best performance, keep your pal-mcp-server directory in the WSL filesystem (e.g., `~/pal-mcp-server`) rather than in the Windows filesystem (`/mnt/c/...`).