docs+docker: Enhanced Docker configuration and workflow fixes (#4)
* adding templates and user guide
* up docs
* up
* up claude.md
* add mb
* umb
* up workflow
* up settings claude
* adding detailed docs
* adding missing files docs
* add main readme for docs
* up main readme
* adding docs for tests
* Complete documentation integration with test structure analysis link

  Adds a link to the comprehensive test structure documentation in the main README.md, finalizing the progressive disclosure strategy for project documentation. This completes the documentation integration work, which includes:
  - Architecture documentation
  - API reference documentation
  - Contributing guidelines
  - Detailed test analysis

  🤖 Generated with [Claude Code](https://claude.ai/code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* removing folders from git
* up
* up
* up gitignore
* feat: Add automatic semantic versioning workflow
  - Create GitHub Actions workflow for automatic version bumping based on PR title prefixes
  - Add version bumping script (scripts/bump_version.py) for programmatic updates
  - Update PR template with semantic versioning guidelines
  - Document versioning workflow in contributing guide
  - Integrate with existing Docker build workflow via git tags

  This enables automatic version management:
  - feat: triggers MINOR version bump
  - fix: triggers PATCH version bump
  - breaking: triggers MAJOR version bump
  - docs/chore/test: no version bump

* fix: Separate Docker workflows for testing and publishing
  - Add docker-test.yml for PR validation (build test only)
  - Fix build_and_publish_docker.yml to trigger only on tags
  - Remove problematic sha prefix causing invalid tag format
  - Ensure proper workflow sequence: PR test → merge → version → publish

* style: Fix black formatting issues in bump_version.py
  - Fix spacing and indentation to pass the black formatter
  - Ensure code quality standards are met for the CI workflow

* style: Modernize type hints in bump_version.py
  - Replace typing.Tuple with the modern tuple syntax
  - Remove deprecated typing imports per ruff suggestions
  - Maintain Python 3.10+ compatibility

* fix: Remove invalid colon in bash else statement
  - Fix bash syntax error in the auto-version workflow
  - Remove Python-style colon from the else statement
  - Resolves exit code 127 in version bump determination

* feat: Add Docker build combinations for non-versioning prefixes
  - Add support for prefix+docker combinations (docs+docker:, chore+docker:, etc.)
  - Enable Docker builds for non-versioning changes when requested
  - Add repository_dispatch trigger for the Docker workflow
  - Update Docker tagging for PR-based builds (pr-X, main-sha)
  - Update PR template with new prefix options

  This allows contributors to force Docker builds for documentation, maintenance, and other non-versioning changes when needed.

* docs: Add comprehensive PR prefix and automation documentation
  - Update CONTRIBUTING.md with a detailed explanation of the PR prefix system
  - Add automation workflow documentation to docs/contributing/workflows.md
  - Create new user-friendly contributing guide at docs/user-guides/contributing-guide.md
  - Include Mermaid diagrams for workflow visualization
  - Document Docker testing combinations and the image tagging strategy
  - Add best practices and common mistakes to avoid

  This provides clear guidance for contributors on using the automated versioning and Docker build system effectively.

* docs+docker: Complete documentation infrastructure with Docker automation testing (#2)
  (squashes the three commits above: the bash else-statement fix, the Docker build combinations, and the PR prefix documentation)

  ---------
  Co-authored-by: Patryk Ciechanski <patryk.ciechanski@inetum.com>
  Co-authored-by: Claude <noreply@anthropic.com>

* fix: Correct digest reference in Docker artifact attestation
  - Add an id to the build step to capture outputs
  - Fix the subject-digest reference to read from steps.build.outputs.digest
  - Resolves the 'One of subject-path or subject-digest must be provided' error

* docs: Add comprehensive Docker image usage instructions
  - Add Option B (Published Docker Image) to the main README.md
  - Update the installation guide with the published image as the fastest option
  - Add comprehensive configuration examples for GHCR images
  - Document the image tagging strategy (latest, versioned, PR builds)
  - Include version pinning examples for stability
  - Highlight benefits: instant setup, no build, cross-platform

  Users can now choose between:
  1. Published image (fastest, no setup) - ghcr.io/patrykiti/gemini-mcp-server:latest
  2. Local build (development, customization) - traditional setup

* feat: Add automated Docker image usage instructions and PR comments
  - Generate comprehensive usage instructions in the workflow summary after each Docker build
  - Include exact docker pull commands with the built image tags
  - Auto-generate Claude Desktop configuration examples
  - Add automatic PR comments with testing instructions for +docker builds
  - Show expected image tags (pr-X, main-sha) in PR comments
  - Include ready-to-use configuration snippets for immediate testing
  - Link to the GitHub Container Registry and Actions for monitoring

  Now when Docker images are built, users get:
  - Step-by-step usage instructions in the workflow summary
  - PR comments with exact pull commands and config
  - Copy-paste-ready Claude Desktop configurations
  - Direct links to monitor build progress

* feat: Add automatic README.md updating after Docker builds
  - Updates Docker image references in README.md and documentation files
  - Automatically commits and pushes changes after image builds
  - Handles both release builds (version tags) and development builds (PR numbers)
  - Ensures documentation always references the latest published images
  - Uses sed pattern matching to update ghcr.io image references

* correcting
* up
* fix: GitHub Actions workflows semantic errors

  Fixed critical semantic and logic errors in the auto-version and Docker workflows.

  auto-version.yml fixes:
  - Removed duplicate echo statements for the should_build_docker output
  - Fixed malformed if/else structure (else after else)
  - Removed redundant conditional blocks for docker: prefixes
  - Cleaned up duplicate lines in summary generation

  build_and_publish_docker.yml fixes:
  - Replaced hardcoded 'patrykiti' with dynamic ${{ github.repository_owner }}
  - Enhanced the regex pattern to support underscores in Docker tags: [a-zA-Z0-9\._-]*
  - Fixed sed patterns for dynamic repository owner detection

  These changes ensure the workflows execute correctly and support any repository owner.

* docs: Add advanced Docker configuration options to README

  Added a comprehensive configuration section with optional environment variables.

  Docker configuration features:
  - Advanced configuration example with all available env vars
  - Complete table of environment variables with descriptions
  - Practical examples for common configuration scenarios
  - Clear documentation of config.py options for Docker users

  Available configuration options:
  - DEFAULT_MODEL: choose between Pro (quality) and Flash (speed)
  - DEFAULT_THINKING_MODE_THINKDEEP: control token costs via thinking depth
  - LOG_LEVEL: debug logging for troubleshooting
  - MCP_PROJECT_ROOT: security sandbox for file access
  - REDIS_URL: custom Redis configuration

  Benefits:
  - Users can customize server behavior without rebuilding images
  - Better cost control through model and thinking mode selection
  - Enhanced security through project root restrictions
  - Improved debugging capabilities with configurable logging
  - Complete transparency of available configuration options

  This addresses a user request for exposing config.py parameters via Docker environment variables.

---------
Co-authored-by: Patryk Ciechanski <patryk.ciechanski@inetum.com>
Co-authored-by: Claude <noreply@anthropic.com>
docs/api/tools/analyze.md (new file, 583 lines)
# Analyze Tool API Reference

## Overview

The **Analyze Tool** provides comprehensive codebase exploration and understanding capabilities. It's designed for in-depth analysis of existing systems, dependency mapping, pattern detection, and architectural comprehension.

## Tool Schema

```json
{
  "name": "analyze",
  "description": "Code exploration and understanding of existing systems",
  "inputSchema": {
    "type": "object",
    "properties": {
      "files": {
        "type": "array",
        "items": {"type": "string"},
        "description": "Files or directories that might be related to the issue"
      },
      "question": {
        "type": "string",
        "description": "What to analyze or look for"
      },
      "analysis_type": {
        "type": "string",
        "enum": ["architecture", "performance", "security", "quality", "general"],
        "default": "general",
        "description": "Type of analysis to perform"
      },
      "output_format": {
        "type": "string",
        "enum": ["summary", "detailed", "actionable"],
        "default": "detailed",
        "description": "How to format the output"
      },
      "thinking_mode": {
        "type": "string",
        "enum": ["minimal", "low", "medium", "high", "max"],
        "default": "medium",
        "description": "Thinking depth for analysis"
      },
      "temperature": {
        "type": "number",
        "minimum": 0,
        "maximum": 1,
        "default": 0.2,
        "description": "Temperature for consistency in analysis"
      },
      "continuation_id": {
        "type": "string",
        "description": "Thread continuation ID for multi-turn conversations",
        "optional": true
      }
    },
    "required": ["files", "question"]
  }
}
```
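A quick way to sanity-check a tool call before sending it is to validate the arguments against this schema. The sketch below is illustrative only (a hand-rolled check rather than the server's actual validation, which uses the full JSON Schema):

```python
# Minimal, illustrative validation of "analyze" arguments against the schema above.
ANALYZE_ENUMS = {
    "analysis_type": {"architecture", "performance", "security", "quality", "general"},
    "output_format": {"summary", "detailed", "actionable"},
    "thinking_mode": {"minimal", "low", "medium", "high", "max"},
}
REQUIRED = ("files", "question")

def validate_analyze_args(args: dict) -> list[str]:
    """Return a list of validation errors (an empty list means the call is well-formed)."""
    errors = [f"missing required field: {f}" for f in REQUIRED if f not in args]
    for field, allowed in ANALYZE_ENUMS.items():
        if field in args and args[field] not in allowed:
            errors.append(f"invalid {field}: {args[field]!r}")
    if "temperature" in args and not 0 <= args["temperature"] <= 1:
        errors.append("temperature must be between 0 and 1")
    return errors

print(validate_analyze_args({"files": ["/workspace/src/"], "question": "Map the architecture"}))  # []
```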
## Usage Patterns

### 1. Architecture Analysis

**Ideal For**:
- Understanding system design patterns
- Mapping component relationships
- Identifying architectural anti-patterns
- Documenting existing systems

**Example**:
```json
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/src/", "/workspace/config/"],
    "question": "Analyze the overall architecture pattern and component relationships",
    "analysis_type": "architecture",
    "thinking_mode": "high",
    "output_format": "detailed"
  }
}
```

**Response Includes**:
- System architecture overview
- Component interaction diagrams
- Data flow patterns
- Integration points and dependencies
- Design pattern identification

### 2. Performance Analysis

**Ideal For**:
- Identifying performance bottlenecks
- Resource usage patterns
- Optimization opportunities
- Scalability assessment

**Example**:
```json
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/api/", "/workspace/database/"],
    "question": "Identify performance bottlenecks and optimization opportunities",
    "analysis_type": "performance",
    "thinking_mode": "high"
  }
}
```

**Response Includes**:
- Performance hotspot identification
- Resource usage analysis
- Caching opportunities
- Database query optimization
- Concurrency and parallelization suggestions

### 3. Security Analysis

**Ideal For**:
- Security vulnerability assessment
- Authentication/authorization review
- Input validation analysis
- Secure coding practice evaluation

**Example**:
```json
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/auth/", "/workspace/api/"],
    "question": "Assess security vulnerabilities and authentication patterns",
    "analysis_type": "security",
    "thinking_mode": "high"
  }
}
```

**Response Includes**:
- Security vulnerability inventory
- Authentication mechanism analysis
- Input validation assessment
- Data exposure risks
- Secure coding recommendations

### 4. Code Quality Analysis

**Ideal For**:
- Code maintainability assessment
- Technical debt identification
- Refactoring opportunities
- Testing coverage evaluation

**Example**:
```json
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/src/"],
    "question": "Evaluate code quality, maintainability, and refactoring needs",
    "analysis_type": "quality",
    "thinking_mode": "medium"
  }
}
```

**Response Includes**:
- Code quality metrics
- Maintainability assessment
- Technical debt inventory
- Refactoring prioritization
- Testing strategy recommendations

### 5. Dependency Analysis

**Ideal For**:
- Understanding module dependencies
- Circular dependency detection
- Third-party library analysis
- Dependency graph visualization

**Example**:
```json
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/package.json", "/workspace/requirements.txt", "/workspace/src/"],
    "question": "Map dependencies and identify potential issues",
    "analysis_type": "general",
    "output_format": "actionable"
  }
}
```
## Parameter Details

### files (required)
- **Type**: array of strings
- **Purpose**: Specifies which files/directories to analyze
- **Behavior**:
  - **Individual Files**: Direct analysis of specified files
  - **Directories**: Recursive scanning with intelligent filtering
  - **Mixed Input**: Combines files and directories in a single analysis
  - **Priority Processing**: Source code files are processed before documentation

**Best Practices**:
- Use specific paths for focused analysis
- Include configuration files for complete context
- Limit scope to relevant components for performance
- Use absolute paths for reliability

### question (required)
- **Type**: string
- **Purpose**: Defines the analysis focus and expected outcomes
- **Effective Question Patterns**:
  - **Exploratory**: "How does the authentication system work?"
  - **Diagnostic**: "Why is the API response time slow?"
  - **Evaluative**: "How maintainable is this codebase?"
  - **Comparative**: "What are the trade-offs in this design?"

### analysis_type (optional)
- **Type**: string enum
- **Default**: `"general"`
- **Purpose**: Tailors the analysis approach and output format

**Analysis Types**:

**architecture**:
- Focuses on system design and component relationships
- Identifies patterns, anti-patterns, and architectural decisions
- Maps data flow and integration points
- Evaluates scalability and extensibility

**performance**:
- Identifies bottlenecks and optimization opportunities
- Analyzes resource usage and efficiency
- Evaluates caching strategies and database performance
- Assesses concurrency and parallelization

**security**:
- Vulnerability assessment and threat modeling
- Authentication and authorization analysis
- Input validation and data protection review
- Secure coding practice evaluation

**quality**:
- Code maintainability and readability assessment
- Technical debt identification and prioritization
- Testing coverage and strategy evaluation
- Refactoring opportunity analysis

**general**:
- Balanced analysis covering multiple aspects
- Good for initial exploration and broad understanding
- Flexible approach adapting to content and question

### output_format (optional)
- **Type**: string enum
- **Default**: `"detailed"`
- **Purpose**: Controls response structure and depth

**Format Types**:

**summary**:
- High-level findings in 2-3 paragraphs
- Key insights and primary recommendations
- Executive-summary style for quick understanding

**detailed** (recommended):
- Comprehensive analysis with examples
- Code references with line numbers
- Multiple perspectives and alternatives
- Actionable recommendations with context

**actionable**:
- Focused on specific next steps
- Prioritized recommendations
- Implementation guidance
- Clear success criteria

### thinking_mode (optional)
- **Type**: string enum
- **Default**: `"medium"`
- **Purpose**: Controls analysis depth and computational budget

**Recommendations by Analysis Scope**:
- **low** (2048 tokens): Small files, focused questions
- **medium** (8192 tokens): Standard analysis, moderate complexity
- **high** (16384 tokens): Comprehensive analysis, complex systems
- **max** (32768 tokens): Deep research, critical system analysis
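The budget figures above amount to a simple lookup table. This is an illustrative sketch, not the server's actual implementation; the `"minimal"` mode appears in the schema enum but its budget is not documented here, so the value below is an assumption:

```python
# Illustrative mapping of thinking_mode to token budgets, per the figures above.
THINKING_BUDGETS = {
    "minimal": 512,   # assumption: budget for "minimal" is not documented above
    "low": 2048,
    "medium": 8192,
    "high": 16384,
    "max": 32768,
}

def thinking_budget(mode: str = "medium") -> int:
    """Return the token budget for a thinking mode, defaulting to 'medium'."""
    return THINKING_BUDGETS[mode]

print(thinking_budget("high"))  # 16384
```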
## Response Format

### Detailed Analysis Structure

```json
{
  "content": "# Architecture Analysis Report\n\n## System Overview\n[High-level architecture summary]\n\n## Component Analysis\n[Detailed component breakdown with file references]\n\n## Design Patterns\n[Identified patterns and their implementations]\n\n## Integration Points\n[External dependencies and API interfaces]\n\n## Recommendations\n[Specific improvement suggestions]\n\n## Technical Debt\n[Areas requiring attention]\n\n## Next Steps\n[Prioritized action items]",
  "metadata": {
    "analysis_type": "architecture",
    "files_analyzed": 23,
    "lines_of_code": 5420,
    "patterns_identified": ["MVC", "Observer", "Factory"],
    "complexity_score": "medium",
    "confidence_level": "high"
  },
  "files_processed": [
    "/workspace/src/main.py:1-150",
    "/workspace/config/settings.py:1-75"
  ],
  "continuation_id": "arch-analysis-uuid",
  "status": "success"
}
```

### Code Reference Format

Analysis responses include precise code references:

```
## Authentication Implementation

The authentication system uses JWT tokens with RS256 signing:

**Token Generation** (`src/auth/jwt_handler.py:45-67`):
- RSA private key loading from environment
- Token expiration set to 24 hours
- User claims include role and permissions

**Token Validation** (`src/middleware/auth.py:23-41`):
- Public key verification
- Expiration checking
- Role-based access control

**Security Concerns**:
1. No token refresh mechanism (jwt_handler.py:45)
2. Hardcoded secret fallback (jwt_handler.py:52)
3. Missing rate limiting on auth endpoints (auth.py:15)
```
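Because references follow a consistent `path:start-end` format, downstream tooling can extract them mechanically. A hedged sketch (the helper below is hypothetical, not part of this project):

```python
import re

# Parse references like "src/auth/jwt_handler.py:45-67" or "auth.py:15".
# Illustrative only; real responses embed these inside markdown prose.
REF_RE = re.compile(r"(?P<path>[\w./-]+):(?P<start>\d+)(?:-(?P<end>\d+))?")

def parse_ref(ref: str) -> tuple[str, int, int]:
    """Return (path, start_line, end_line); single-line refs repeat the line number."""
    m = REF_RE.fullmatch(ref)
    if m is None:
        raise ValueError(f"not a code reference: {ref!r}")
    start = int(m["start"])
    end = int(m["end"]) if m["end"] else start
    return m["path"], start, end

print(parse_ref("src/auth/jwt_handler.py:45-67"))
```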
## Advanced Usage Patterns

### 1. Progressive Analysis

**Phase 1: System Overview**
```json
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/"],
    "question": "Provide high-level architecture overview",
    "analysis_type": "architecture",
    "output_format": "summary",
    "thinking_mode": "low"
  }
}
```

**Phase 2: Deep Dive**
```json
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/core/", "/workspace/api/"],
    "question": "Analyze core components and API design in detail",
    "analysis_type": "architecture",
    "output_format": "detailed",
    "thinking_mode": "high",
    "continuation_id": "overview-analysis-id"
  }
}
```

### 2. Comparative Analysis

**Current State Analysis**:
```json
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/legacy/"],
    "question": "Document current system architecture and limitations",
    "analysis_type": "architecture"
  }
}
```

**Target State Analysis**:
```json
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/new-design/"],
    "question": "Analyze proposed architecture and compare with legacy system",
    "analysis_type": "architecture",
    "continuation_id": "current-state-id"
  }
}
```

### 3. Multi-Perspective Analysis

**Technical Analysis**:
```json
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/"],
    "question": "Technical implementation analysis",
    "analysis_type": "quality",
    "thinking_mode": "high"
  }
}
```

**Performance Analysis**:
```json
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/"],
    "question": "Performance characteristics and optimization opportunities",
    "analysis_type": "performance",
    "continuation_id": "technical-analysis-id"
  }
}
```

**Security Analysis**:
```json
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/"],
    "question": "Security posture and vulnerability assessment",
    "analysis_type": "security",
    "continuation_id": "technical-analysis-id"
  }
}
```
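Chained flows like the ones above boil down to threading each response's `continuation_id` into the next call. A minimal client-side sketch, assuming a hypothetical `call_tool` transport function (a stand-in for your MCP client, not an API from this project):

```python
# Hypothetical chaining helper: call_tool is a stand-in for an MCP client's
# request function and is NOT a real API from this project.
def run_chain(call_tool, steps: list[dict]) -> list[dict]:
    """Run analyze calls in sequence, threading continuation_id between them."""
    responses, continuation_id = [], None
    for arguments in steps:
        if continuation_id is not None:
            arguments = {**arguments, "continuation_id": continuation_id}
        response = call_tool("analyze", arguments)
        continuation_id = response.get("continuation_id", continuation_id)
        responses.append(response)
    return responses

# Usage with a fake transport, for illustration only:
fake = lambda name, args: {"status": "success", "continuation_id": "abc-123", "echo": args}
out = run_chain(fake, [{"files": ["/workspace/"], "question": "overview"},
                       {"files": ["/workspace/core/"], "question": "deep dive"}])
print(out[1]["echo"]["continuation_id"])  # abc-123
```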
## File Processing Behavior

### Directory Processing

**Recursive Scanning**:
- Automatically discovers relevant files in subdirectories
- Applies intelligent filtering based on file types
- Prioritizes source code over documentation and logs
- Respects `.gitignore` patterns when present

**File Type Prioritization**:
1. **Source Code** (.py, .js, .ts, .java, etc.) - 60% of token budget
2. **Configuration** (.json, .yaml, .toml, etc.) - 25% of token budget
3. **Documentation** (.md, .txt, .rst, etc.) - 10% of token budget
4. **Other Files** (.log, .tmp, etc.) - 5% of token budget
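The percentages above translate directly into per-category token budgets. An illustrative computation (the splits are as documented; the helper itself is hypothetical):

```python
# Token-budget split across file categories, per the percentages above.
CATEGORY_SHARE = {"source": 0.60, "config": 0.25, "docs": 0.10, "other": 0.05}

def split_budget(total_tokens: int) -> dict[str, int]:
    """Allocate a total token budget across file categories."""
    return {cat: int(total_tokens * share) for cat, share in CATEGORY_SHARE.items()}

print(split_budget(8192))
```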
### Content Processing

**Smart Truncation**:
- Preserves file structure and important sections
- Maintains code context and comments
- Includes file headers and key functions
- Adds truncation markers with statistics

**Line Number References**:
- All code examples include precise line numbers
- Enables easy navigation to specific locations
- Supports IDE integration and quick access
- Maintains accuracy across file versions
## Integration with Other Tools

### Analyze → ThinkDeep Flow

```json
// 1. Comprehensive analysis
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/"],
    "question": "Understand current architecture and identify improvement areas",
    "analysis_type": "architecture"
  }
}

// 2. Strategic planning based on findings
{
  "name": "thinkdeep",
  "arguments": {
    "current_analysis": "Analysis findings: monolithic architecture with performance bottlenecks...",
    "focus_areas": ["modernization", "scalability", "migration_strategy"],
    "continuation_id": "architecture-analysis-id"
  }
}
```

### Analyze → CodeReview Flow

```json
// 1. System understanding
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/auth/"],
    "question": "Understand authentication implementation patterns",
    "analysis_type": "security"
  }
}

// 2. Detailed code review
{
  "name": "codereview",
  "arguments": {
    "files": ["/workspace/auth/"],
    "context": "Analysis revealed potential security concerns in authentication",
    "review_type": "security",
    "continuation_id": "auth-analysis-id"
  }
}
```
## Performance Characteristics

### Analysis Speed by File Count
- **1-10 files**: 2-5 seconds
- **11-50 files**: 5-15 seconds
- **51-200 files**: 15-45 seconds
- **200+ files**: 45-120 seconds (consider breaking into smaller scopes)

### Memory Usage
- **Small projects** (<1MB): ~100MB
- **Medium projects** (1-10MB): ~300MB
- **Large projects** (10-100MB): ~800MB
- **Enterprise projects** (>100MB): May require multiple focused analyses

### Quality Indicators
- **Coverage**: Percentage of files analyzed vs. total files
- **Depth**: Number of insights per file analyzed
- **Accuracy**: Precision of code references and explanations
- **Actionability**: Specificity of recommendations
## Best Practices

### Effective Analysis Questions

**Specific and Focused**:
```
✅ "How does the caching layer integrate with the database access patterns?"
✅ "What are the security implications of the current API authentication?"
✅ "Where are the performance bottlenecks in the request processing pipeline?"

❌ "Analyze this code"
❌ "Is this good?"
❌ "What should I know?"
```

**Context-Rich Questions**:
```
✅ "Given that we need to scale to 10x current traffic, what are the architectural constraints?"
✅ "For a team of junior developers, what are the maintainability concerns?"
✅ "Considering SOX compliance requirements, what are the audit trail gaps?"
```

### Scope Management

1. **Start Broad, Then Focus**: Begin with high-level analysis, then drill down to specific areas
2. **Logical Grouping**: Analyze related components together for better context
3. **Iterative Refinement**: Use continuation to build deeper understanding
4. **Balance Depth and Breadth**: Match the thinking mode to the analysis scope

### File Selection Strategy

1. **Core First**: Start with main application files and entry points
2. **Configuration Included**: Always include config files for complete context
3. **Test Analysis**: Include tests to understand expected behavior
4. **Documentation Review**: Add docs to understand the intended design

---

The Analyze Tool serves as your code comprehension partner, providing deep insights into existing systems and enabling informed decision-making for development and modernization efforts.
docs/api/tools/chat.md (new file, 353 lines)
# Chat Tool API Reference

## Overview

The **Chat Tool** provides immediate access to Gemini's conversational capabilities for quick questions, brainstorming sessions, and general collaboration. It's designed for rapid iteration and exploration of ideas without the computational overhead of deeper analysis tools.

## Tool Schema

```json
{
  "name": "chat",
  "description": "Quick questions, brainstorming, simple code snippets",
  "inputSchema": {
    "type": "object",
    "properties": {
      "prompt": {
        "type": "string",
        "description": "Your question, topic, or current thinking to discuss with Gemini"
      },
      "continuation_id": {
        "type": "string",
        "description": "Thread continuation ID for multi-turn conversations",
        "optional": true
      },
      "temperature": {
        "type": "number",
        "description": "Response creativity (0-1, default 0.5)",
        "minimum": 0,
        "maximum": 1,
        "default": 0.5
      },
      "thinking_mode": {
        "type": "string",
        "description": "Thinking depth: minimal|low|medium|high|max",
        "enum": ["minimal", "low", "medium", "high", "max"],
        "default": "medium"
      },
      "files": {
        "type": "array",
        "items": {"type": "string"},
        "description": "Optional files for context (must be absolute paths)",
        "optional": true
      }
    },
    "required": ["prompt"]
  }
}
```

## Usage Patterns

### 1. Quick Questions

**Ideal For**:
- Clarifying concepts or terminology
- Getting immediate explanations
- Understanding code snippets
- Exploring ideas rapidly

**Example**:
```json
{
  "name": "chat",
  "arguments": {
    "prompt": "What's the difference between async and await in Python?",
    "thinking_mode": "low"
  }
}
```

### 2. Brainstorming Sessions

**Ideal For**:
- Generating multiple solution approaches
- Exploring design alternatives
- Creative problem solving
- Architecture discussions

**Example**:
```json
{
  "name": "chat",
  "arguments": {
    "prompt": "I need to design a caching layer for my MCP server. What are some approaches I should consider?",
    "temperature": 0.7,
    "thinking_mode": "medium"
  }
}
```

### 3. Code Discussions

**Ideal For**:
- Reviewing small code snippets
- Understanding implementation patterns
- Getting quick feedback
- Exploring API designs

**Example**:
```json
{
  "name": "chat",
  "arguments": {
    "prompt": "Review this error handling pattern and suggest improvements",
    "files": ["/workspace/utils/error_handling.py"],
    "thinking_mode": "medium"
  }
}
```

### 4. Multi-Turn Conversations

**Ideal For**:
- Building on previous discussions
- Iterative refinement of ideas
- Context-aware follow-ups
- Continuous collaboration

**Example**:
```json
{
  "name": "chat",
  "arguments": {
    "prompt": "Based on our previous discussion about caching, how would you implement cache invalidation?",
    "continuation_id": "550e8400-e29b-41d4-a716-446655440000"
  }
}
```
## Parameter Details

### prompt (required)
- **Type**: string
- **Purpose**: The main input for Gemini to process
- **Best Practices**:
  - Be specific and clear about what you need
  - Include relevant context in the prompt itself
  - Ask focused questions for better responses
  - Use conversational language for brainstorming

### continuation_id (optional)
- **Type**: string (UUID format)
- **Purpose**: Links to previous conversation context
- **Behavior**:
  - If provided, loads conversation history from Redis
  - Maintains context across multiple tool calls
  - Enables follow-up questions and refinement
  - Automatically generated on the first call if not provided

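The continuation behavior can be sketched with an in-memory store standing in for Redis; the helper name and structure below are illustrative, not the server's actual implementation:

```python
import uuid

# In-memory stand-in for the Redis conversation store (TTL handling omitted).
_threads: dict = {}

def start_or_continue(prompt: str, continuation_id: str = None):
    """Generate a thread ID on the first call; append to history on follow-ups."""
    if continuation_id is None:
        continuation_id = str(uuid.uuid4())
    _threads.setdefault(continuation_id, []).append(prompt)
    return continuation_id, _threads[continuation_id]
```

Passing the returned ID back on the next call reattaches the new prompt to the same history, which is how follow-up questions stay context-aware.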
### temperature (optional)
- **Type**: number (0.0-1.0)
- **Default**: 0.5
- **Purpose**: Controls response creativity and variability
- **Guidelines**:
  - **0.0-0.3**: Focused, deterministic responses (technical questions)
  - **0.4-0.6**: Balanced creativity and accuracy (general discussion)
  - **0.7-1.0**: High creativity (brainstorming, exploration)

### thinking_mode (optional)
- **Type**: string enum
- **Default**: "medium"
- **Purpose**: Controls the computational budget for analysis depth
- **Options**:
  - **minimal** (128 tokens): Quick yes/no answers, simple clarifications
  - **low** (2048 tokens): Basic explanations, straightforward questions
  - **medium** (8192 tokens): Standard discussions, moderate complexity
  - **high** (16384 tokens): Deep explanations, complex topics
  - **max** (32768 tokens): Maximum depth, research-level discussions

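The mode-to-budget relationship can be expressed as a simple lookup; the figures mirror the list above, but the mapping name and helper are hypothetical, shown only to make the contract concrete:

```python
# Illustrative mapping of thinking_mode values to token budgets.
THINKING_BUDGETS = {
    "minimal": 128,
    "low": 2048,
    "medium": 8192,
    "high": 16384,
    "max": 32768,
}

def resolve_budget(mode: str = "medium") -> int:
    """Return the token budget for a thinking_mode, rejecting unknown values."""
    if mode not in THINKING_BUDGETS:
        raise ValueError(f"Invalid thinking_mode: {mode!r}")
    return THINKING_BUDGETS[mode]
```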
### files (optional)
- **Type**: array of strings
- **Purpose**: Provides file context for the discussion
- **Constraints**:
  - Must be absolute paths
  - Subject to sandbox validation (PROJECT_ROOT)
  - Limited to 50 files per request
  - Total content limited by the thinking_mode token budget

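A minimal sketch of the sandbox check these constraints imply, assuming a `/workspace` project root; this is illustrative only, not the server's actual validation code:

```python
import os

def validate_files(paths, project_root="/workspace", max_files=50):
    """Reject relative paths, paths escaping the sandbox, and oversized lists."""
    if len(paths) > max_files:
        raise ValueError(f"Too many files: {len(paths)} > {max_files}")
    root = project_root.rstrip("/") + "/"
    validated = []
    for path in paths:
        if not os.path.isabs(path):
            raise ValueError(f"Path must be absolute: {path}")
        normalized = os.path.normpath(path)  # collapses ../ traversal attempts
        if not normalized.startswith(root):
            raise ValueError(f"Path outside project sandbox: {path}")
        validated.append(normalized)
    return validated
```

Normalizing before the prefix check matters: `/workspace/../etc/passwd` collapses to `/etc/passwd` and is rejected rather than slipping through.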
## Response Format

### Standard Response Structure

```json
{
  "content": "Main response content...",
  "metadata": {
    "thinking_mode": "medium",
    "temperature": 0.5,
    "tokens_used": 2156,
    "response_time": "1.2s",
    "files_processed": 1
  },
  "continuation_id": "550e8400-e29b-41d4-a716-446655440000",
  "files_processed": [
    "/workspace/utils/error_handling.py"
  ],
  "status": "success"
}
```

### Response Content Types

**Explanatory Responses**:
- Clear, structured explanations
- Step-by-step breakdowns
- Code examples with annotations
- Concept comparisons and contrasts

**Brainstorming Responses**:
- Multiple approach options
- Pros/cons analysis
- Creative alternatives
- Implementation considerations

**Code Discussion Responses**:
- Specific line-by-line feedback
- Pattern recognition and naming
- Improvement suggestions
- Best practice recommendations

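A caller can consume this envelope with ordinary JSON parsing; the helper below is a hypothetical client-side convenience using only field names from the example above:

```python
import json

def summarize_response(raw: str) -> str:
    """Parse the response envelope and report status plus token usage."""
    response = json.loads(raw)
    meta = response.get("metadata", {})
    return f"{response['status']}: {meta.get('tokens_used', 0)} tokens used"
```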
## Error Handling

### Common Errors

**Invalid Temperature**:
```json
{
  "error": "Invalid temperature value: 1.5. Must be between 0.0 and 1.0"
}
```

**File Access Error**:
```json
{
  "error": "File access denied: /etc/passwd. Path outside project sandbox."
}
```

**Token Limit Exceeded**:
```json
{
  "error": "Content exceeds token limit for thinking_mode 'low'. Consider using 'medium' or 'high'."
}
```

### Error Recovery Strategies

1. **Parameter Validation**: Adjust invalid parameters to acceptable ranges
2. **File Filtering**: Remove inaccessible files and continue with the available ones
3. **Token Management**: Truncate large content while preserving structure
4. **Graceful Degradation**: Provide partial responses when possible

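Strategy 1 above amounts to clamping rather than rejecting; a minimal sketch, with names chosen for illustration:

```python
VALID_MODES = ("minimal", "low", "medium", "high", "max")

def recover_parameters(temperature: float, thinking_mode: str):
    """Clamp out-of-range parameters instead of failing the whole request."""
    temperature = min(max(temperature, 0.0), 1.0)  # clamp into [0, 1]
    if thinking_mode not in VALID_MODES:
        thinking_mode = "medium"  # fall back to the documented default
    return temperature, thinking_mode
```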
## Performance Characteristics

### Response Times
- **minimal mode**: ~0.5-1s (simple questions)
- **low mode**: ~1-2s (basic explanations)
- **medium mode**: ~2-4s (standard discussions)
- **high mode**: ~4-8s (deep analysis)
- **max mode**: ~8-15s (research-level)

### Resource Usage
- **Memory**: ~50-200MB per conversation thread
- **Network**: Minimal (only Gemini API calls)
- **Storage**: Redis conversation persistence (24h TTL)
- **CPU**: Low (primarily I/O bound)

### Optimization Tips

1. **Use an Appropriate Thinking Mode**: Don't over-engineer simple questions
2. **Leverage Continuation**: Build on previous context rather than repeating it
3. **Focus Prompts**: Specific questions get better responses
4. **Batch Related Questions**: Use conversation threading for related topics

## Best Practices

### Effective Prompting

**Good Examples**:
```
"Explain the trade-offs between Redis and in-memory caching for an MCP server"
"Help me brainstorm error handling strategies for async file operations"
"What are the security implications of this authentication pattern?"
```

**Avoid**:
```
"Help me" (too vague)
"Fix this code" (without context)
"What should I do?" (open-ended without scope)
```

### Conversation Management

1. **Use Continuation IDs**: Maintain context across related discussions
2. **Logical Grouping**: Keep related topics in the same conversation thread
3. **Clear Transitions**: Explicitly state when changing topics
4. **Context Refresh**: Occasionally summarize progress in long conversations

### File Usage

1. **Relevant Files Only**: Include only files directly related to the discussion
2. **Prioritize Source Code**: Code files provide more value than logs
3. **Reasonable Scope**: 5-10 files maximum for focused discussions
4. **Absolute Paths**: Always use full paths for reliability

## Integration Examples

### With Other Tools

**Chat → Analyze Flow**:
```json
// 1. Quick discussion
{"name": "chat", "arguments": {"prompt": "Should I refactor this module?"}}

// 2. Deep analysis based on chat insights
{"name": "analyze", "arguments": {
  "files": ["/workspace/module.py"],
  "question": "Analyze refactoring opportunities based on maintainability",
  "continuation_id": "previous-chat-thread-id"
}}
```

**Chat → ThinkDeep Flow**:
```json
// 1. Initial exploration
{"name": "chat", "arguments": {"prompt": "I need to scale my API to handle 1000 RPS"}}

// 2. Strategic planning
{"name": "thinkdeep", "arguments": {
  "current_analysis": "Need to scale API to 1000 RPS",
  "focus_areas": ["performance", "architecture", "caching"],
  "continuation_id": "previous-chat-thread-id"
}}
```

### Workflow Integration

**Development Workflow**:
1. **Chat**: Quick question about the implementation approach
2. **Analyze**: Deep dive into the existing codebase
3. **Chat**: Discussion of findings and next steps
4. **CodeReview**: Quality validation of changes

**Learning Workflow**:
1. **Chat**: Ask about unfamiliar concepts
2. **Chat**: Request examples and clarifications
3. **Chat**: Discuss practical applications
4. **Analyze**: Study real codebase examples

---

The Chat Tool serves as the primary interface for rapid AI collaboration, providing immediate access to Gemini's knowledge while maintaining conversation context and enabling seamless integration with deeper analysis tools.

# CodeReview Tool API Reference

## Overview

The **CodeReview Tool** provides comprehensive code quality, security, and bug detection analysis. Built on Gemini's deep analytical capabilities, it performs systematic code review with severity-based issue categorization and specific fix recommendations.

## Tool Schema

```json
{
  "name": "codereview",
  "description": "Code quality, security, bug detection",
  "inputSchema": {
    "type": "object",
    "properties": {
      "files": {
        "type": "array",
        "items": {"type": "string"},
        "description": "Code files or directories to review"
      },
      "context": {
        "type": "string",
        "description": "User's summary of what the code does, expected behavior, constraints, and review objectives"
      },
      "review_type": {
        "type": "string",
        "enum": ["full", "security", "performance", "quick"],
        "default": "full",
        "description": "Type of review to perform"
      },
      "severity_filter": {
        "type": "string",
        "enum": ["critical", "high", "medium", "all"],
        "default": "all",
        "description": "Minimum severity level to report"
      },
      "standards": {
        "type": "string",
        "description": "Coding standards to enforce",
        "optional": true
      },
      "thinking_mode": {
        "type": "string",
        "enum": ["minimal", "low", "medium", "high", "max"],
        "default": "medium",
        "description": "Thinking depth for analysis"
      },
      "temperature": {
        "type": "number",
        "minimum": 0,
        "maximum": 1,
        "default": 0.2,
        "description": "Temperature for consistency in analysis"
      },
      "continuation_id": {
        "type": "string",
        "description": "Thread continuation ID for multi-turn conversations",
        "optional": true
      }
    },
    "required": ["files", "context"]
  }
}
```

## Review Types

### 1. Full Review (default)

**Comprehensive analysis covering**:
- **Security**: Vulnerability detection, authentication flaws, input validation
- **Performance**: Bottlenecks, resource usage, optimization opportunities
- **Quality**: Maintainability, readability, technical debt
- **Bugs**: Logic errors, edge cases, exception handling
- **Standards**: Coding conventions, best practices, style consistency

**Example**:
```json
{
  "name": "codereview",
  "arguments": {
    "files": ["/workspace/src/auth/", "/workspace/src/api/"],
    "context": "Authentication and API modules for user management system. Handles JWT tokens, password hashing, and role-based access control.",
    "review_type": "full",
    "thinking_mode": "high"
  }
}
```

### 2. Security Review

**Focused security assessment**:
- **Authentication**: Token handling, session management, password security
- **Authorization**: Access controls, privilege escalation, RBAC implementation
- **Input Validation**: SQL injection, XSS, command injection vulnerabilities
- **Data Protection**: Encryption, sensitive data exposure, logging security
- **Configuration**: Security headers, SSL/TLS, environment variables

**Example**:
```json
{
  "name": "codereview",
  "arguments": {
    "files": ["/workspace/auth/", "/workspace/middleware/"],
    "context": "Security review for production deployment. System handles PII data and financial transactions.",
    "review_type": "security",
    "severity_filter": "high",
    "thinking_mode": "high"
  }
}
```

### 3. Performance Review

**Performance-focused analysis**:
- **Algorithms**: Time/space complexity, optimization opportunities
- **Database**: Query efficiency, N+1 problems, indexing strategies
- **Caching**: Cache utilization, invalidation strategies, cache stampede
- **Concurrency**: Thread safety, deadlocks, race conditions
- **Resource Management**: Memory leaks, connection pooling, file handling

**Example**:
```json
{
  "name": "codereview",
  "arguments": {
    "files": ["/workspace/api/", "/workspace/database/"],
    "context": "API layer experiencing high latency under load. Database queries taking 2-5 seconds on average.",
    "review_type": "performance",
    "thinking_mode": "high"
  }
}
```

### 4. Quick Review

**Rapid assessment focusing on**:
- **Critical Issues**: Severe bugs and security vulnerabilities only
- **Code Smells**: Obvious anti-patterns and maintainability issues
- **Quick Wins**: Easy-to-fix improvements with high impact
- **Standards**: Basic coding convention violations

**Example**:
```json
{
  "name": "codereview",
  "arguments": {
    "files": ["/workspace/feature/new-payment-flow.py"],
    "context": "Quick review of new payment processing feature before merge",
    "review_type": "quick",
    "severity_filter": "high"
  }
}
```

## Severity Classification

### Critical Issues
- **Security vulnerabilities** with immediate exploitation risk
- **Data corruption** or loss potential
- **System crashes** or availability impacts
- **Compliance violations** (GDPR, SOX, HIPAA)

**Example Finding**:
```
🔴 CRITICAL - SQL Injection Vulnerability
File: api/users.py:45
Code: f"SELECT * FROM users WHERE id = {user_id}"
Impact: Complete database compromise possible
Fix: Use parameterized queries: cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
```

### High Severity Issues
- **Authentication bypasses** or privilege escalation
- **Performance bottlenecks** affecting user experience
- **Logic errors** in critical business flows
- **Resource leaks** causing system degradation

**Example Finding**:
```
🟠 HIGH - Authentication Bypass
File: middleware/auth.py:23
Code: if token and jwt.decode(token, verify=False):
Impact: JWT signature verification disabled
Fix: Enable verification: jwt.decode(token, secret_key, algorithms=["HS256"])
```

### Medium Severity Issues
- **Code maintainability** problems
- **Minor security** hardening opportunities
- **Performance optimizations** for better efficiency
- **Error handling** improvements

**Example Finding**:
```
🟡 MEDIUM - Error Information Disclosure
File: api/auth.py:67
Code: return {"error": str(e)}
Impact: Sensitive error details exposed to clients
Fix: Log the full error, return a generic message: logger.error(str(e)); return {"error": "Authentication failed"}
```

### Low Severity Issues
- **Code style** and convention violations
- **Documentation** gaps
- **Minor optimizations** with minimal impact
- **Code duplication** opportunities

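The `severity_filter` parameter prunes findings below the requested level. A minimal sketch of that filtering, with an illustrative rank ordering (names are hypothetical):

```python
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def apply_severity_filter(findings, severity_filter="all"):
    """Keep only issues at or above the requested level; 'all' keeps everything."""
    if severity_filter == "all":
        return list(findings)
    threshold = SEVERITY_RANK[severity_filter]
    return [f for f in findings if SEVERITY_RANK[f["severity"]] >= threshold]
```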
## Response Format

### Structured Review Report

```json
{
  "content": "# Code Review Report\n\n## Executive Summary\n- **Files Reviewed**: 12\n- **Issues Found**: 23 (3 Critical, 7 High, 9 Medium, 4 Low)\n- **Overall Quality**: Moderate - Requires attention before production\n\n## Critical Issues (3)\n\n### 🔴 SQL Injection in User Query\n**File**: `api/users.py:45`\n**Severity**: Critical\n**Issue**: Unsafe string interpolation in SQL query\n```python\n# Current (vulnerable)\nquery = f\"SELECT * FROM users WHERE id = {user_id}\"\n\n# Fixed (secure)\nquery = \"SELECT * FROM users WHERE id = %s\"\ncursor.execute(query, (user_id,))\n```\n**Impact**: Complete database compromise\n**Priority**: Fix immediately\n\n## Security Assessment\n- Authentication mechanism: JWT with proper signing ✅\n- Input validation: Missing in 3 endpoints ❌\n- Error handling: Overly verbose error messages ❌\n\n## Performance Analysis\n- Database queries: 2 N+1 query problems identified\n- Caching: No caching layer implemented\n- Algorithm efficiency: Sorting algorithm in user_search O(n²)\n\n## Recommendations\n1. **Immediate**: Fix critical SQL injection vulnerabilities\n2. **Short-term**: Implement input validation middleware\n3. **Medium-term**: Add caching layer for frequently accessed data\n4. **Long-term**: Refactor sorting algorithms for better performance",
  "metadata": {
    "review_type": "full",
    "files_reviewed": 12,
    "lines_of_code": 3420,
    "issues_by_severity": {
      "critical": 3,
      "high": 7,
      "medium": 9,
      "low": 4
    },
    "security_score": 6.5,
    "maintainability_score": 7.2,
    "performance_score": 5.8,
    "overall_quality": "moderate"
  },
  "continuation_id": "review-550e8400",
  "status": "success"
}
```

### Issue Categorization

**Security Issues**:
- Authentication and authorization flaws
- Input validation vulnerabilities
- Data exposure and privacy concerns
- Cryptographic implementation errors

**Performance Issues**:
- Algorithm inefficiencies
- Database optimization opportunities
- Memory and resource management
- Concurrency and scaling concerns

**Quality Issues**:
- Code maintainability problems
- Technical debt accumulation
- Testing coverage gaps
- Documentation deficiencies

**Bug Issues**:
- Logic errors and edge cases
- Exception handling problems
- Race conditions and timing issues
- Integration and compatibility problems

## Advanced Usage Patterns

### 1. Pre-Commit Review

**Before committing changes**:
```json
{
  "name": "codereview",
  "arguments": {
    "files": ["/workspace/modified_files.txt"],
    "context": "Pre-commit review of changes for the user authentication feature",
    "review_type": "full",
    "severity_filter": "medium",
    "standards": "PEP 8, security-first coding practices"
  }
}
```

### 2. Security Audit

**Comprehensive security assessment**:
```json
{
  "name": "codereview",
  "arguments": {
    "files": ["/workspace/"],
    "context": "Security audit for SOC 2 compliance. System processes payment data and PII.",
    "review_type": "security",
    "severity_filter": "critical",
    "thinking_mode": "max",
    "standards": "OWASP Top 10, PCI DSS requirements"
  }
}
```

### 3. Performance Optimization

**Performance-focused review**:
```json
{
  "name": "codereview",
  "arguments": {
    "files": ["/workspace/api/", "/workspace/database/"],
    "context": "API response times increased 300% with scale. Need performance optimization.",
    "review_type": "performance",
    "thinking_mode": "high"
  }
}
```

### 4. Legacy Code Assessment

**Technical debt evaluation**:
```json
{
  "name": "codereview",
  "arguments": {
    "files": ["/workspace/legacy/"],
    "context": "Legacy system modernization assessment. Code is 5+ years old with limited documentation.",
    "review_type": "full",
    "thinking_mode": "high",
    "standards": "Modern Python practices, type hints, async patterns"
  }
}
```

## Integration with CLAUDE.md Collaboration

### Double Validation Protocol

**Primary Analysis** (Gemini):
```json
{
  "name": "codereview",
  "arguments": {
    "files": ["/workspace/security/"],
    "context": "Security-critical authentication module review",
    "review_type": "security",
    "thinking_mode": "high"
  }
}
```

**Adversarial Review** (Claude):
- Challenge findings and look for edge cases
- Validate assumptions about security implications
- Cross-reference with security best practices
- Identify potential false positives or missed issues

### Memory-Driven Context

**Context Retrieval**:
```python
# Before the review, query memory for related context
previous_findings = memory.search_nodes("security review authentication")
architectural_decisions = memory.search_nodes("authentication architecture")
```

**Findings Storage**:
```python
# Store review findings for future reference
memory.create_entities([{
    "name": "Security Review - Authentication Module",
    "entityType": "quality_records",
    "observations": ["3 critical vulnerabilities found", "JWT implementation secure", "Input validation missing"]
}])
```

## Best Practices

### Effective Context Provision

**Comprehensive Context**:
```json
{
  "context": "E-commerce checkout flow handling payment processing. Requirements: PCI DSS compliance, 99.9% uptime, <200ms response time. Known issues: occasional payment failures under high load. Recent changes: added new payment provider integration. Team: 3 senior, 2 junior developers. Timeline: Production deployment in 2 weeks."
}
```

**Technical Context**:
```json
{
  "context": "Microservice architecture with Docker containers. Tech stack: Python 3.9, FastAPI, PostgreSQL, Redis. Load balancer: NGINX. Monitoring: Prometheus/Grafana. Authentication: OAuth 2.0 with JWT. Expected load: 1000 RPS peak."
}
```

### Review Scope Management

1. **Start with Critical Paths**: Review security- and performance-critical code first
2. **Incremental Reviews**: Review code in logical chunks rather than the entire codebase
3. **Context-Aware**: Always provide business context and technical constraints
4. **Follow-up Reviews**: Use continuation for iterative improvement tracking

### Issue Prioritization

1. **Security First**: Address critical security issues immediately
2. **Business Impact**: Prioritize issues affecting user experience or revenue
3. **Technical Debt**: Balance new features with technical debt reduction
4. **Team Capacity**: Consider team skills and available time for fixes

### Quality Gates

**Pre-Commit Gates**:
- No critical or high severity issues
- All security vulnerabilities addressed
- Performance regressions identified and planned for
- Code style and standards compliance

**Pre-Production Gates**:
- Comprehensive security review completed
- Performance benchmarks met
- Documentation updated
- Monitoring and alerting configured

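The severity-based pre-commit gate above can be automated against the `issues_by_severity` metadata from a review response; a minimal sketch, with a hypothetical helper name (the other gates are omitted):

```python
def passes_precommit_gate(issues_by_severity: dict) -> bool:
    """Block the commit when any critical or high severity issues remain."""
    return (issues_by_severity.get("critical", 0) == 0
            and issues_by_severity.get("high", 0) == 0)
```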
---

The CodeReview Tool provides systematic, thorough code analysis that integrates seamlessly with development workflows while maintaining high standards for security, performance, and maintainability.

# Debug Tool API Reference

## Overview

The **Debug Tool** provides expert-level debugging and root cause analysis capabilities. Leveraging Gemini's analytical power, it systematically investigates errors, analyzes stack traces, and provides comprehensive debugging strategies, with a 1M token capacity for handling large diagnostic files.

## Tool Schema

```json
{
  "name": "debug",
  "description": "Root cause analysis, error investigation",
  "inputSchema": {
    "type": "object",
    "properties": {
      "error_description": {
        "type": "string",
        "description": "Error message, symptoms, or issue description"
      },
      "error_context": {
        "type": "string",
        "description": "Stack trace, logs, or additional error context",
        "optional": true
      },
      "files": {
        "type": "array",
        "items": {"type": "string"},
        "description": "Files or directories that might be related to the issue",
        "optional": true
      },
      "previous_attempts": {
        "type": "string",
        "description": "What has been tried already",
        "optional": true
      },
      "runtime_info": {
        "type": "string",
        "description": "Environment, versions, or runtime information",
        "optional": true
      },
      "thinking_mode": {
        "type": "string",
        "enum": ["minimal", "low", "medium", "high", "max"],
        "default": "medium",
        "description": "Thinking depth for analysis"
      },
      "temperature": {
        "type": "number",
        "minimum": 0,
        "maximum": 1,
        "default": 0.2,
        "description": "Temperature for accuracy in debugging"
      },
      "continuation_id": {
        "type": "string",
        "description": "Thread continuation ID for multi-turn conversations",
        "optional": true
      }
    },
    "required": ["error_description"]
  }
}
```

## Debugging Capabilities

### 1. Stack Trace Analysis

**Multi-language stack trace parsing and analysis**:
- **Python**: Exception hierarchies, traceback analysis, module resolution
- **JavaScript**: Error objects, async stack traces, source map support
- **Java**: Exception chains, thread dumps, JVM analysis
- **C/C++**: Core dumps, segmentation faults, memory corruption
- **Go**: Panic analysis, goroutine dumps, race condition detection

**Example**:
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "Application crashes with a segmentation fault during user login",
    "error_context": "Traceback (most recent call last):\n  File \"/app/auth/login.py\", line 45, in authenticate_user\n    result = hash_password(password)\n  File \"/app/utils/crypto.py\", line 23, in hash_password\n    return bcrypt.hashpw(password.encode(), salt)\nSegmentationFault: 11",
    "files": ["/workspace/auth/", "/workspace/utils/crypto.py"],
    "runtime_info": "Python 3.9.7, bcrypt 3.2.0, Ubuntu 20.04, Docker container"
  }
}
```

### 2. Performance Issue Investigation

**Systematic performance debugging**:
- **Memory Leaks**: Heap analysis, reference tracking, garbage collection
- **CPU Bottlenecks**: Profiling data analysis, hot path identification
- **I/O Problems**: Database queries, file operations, network latency
- **Concurrency Issues**: Deadlocks, race conditions, thread contention

**Example**:
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "API response time degraded from 200ms to 5-10 seconds after a recent deployment",
    "error_context": "Memory usage climbing steadily. No obvious errors in logs. CPU usage normal.",
    "files": ["/workspace/api/", "/workspace/database/queries.py"],
    "previous_attempts": "Restarted services, checked database indexes, reviewed recent code changes",
    "runtime_info": "FastAPI 0.68.0, PostgreSQL 13, Redis 6.2, K8s deployment"
  }
}
```

### 3. Integration & Configuration Issues

**System integration debugging**:
- **Database Connections**: Connection pooling, timeout issues, authentication
- **External APIs**: Network connectivity, authentication, rate limiting
- **Configuration**: Environment variables, file permissions, service discovery
- **Deployment**: Container issues, orchestration problems, resource constraints

**Example**:
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "Microservice intermittently fails to connect to the database with 'connection timeout' errors",
    "error_context": "Error occurs approximately every 10-15 minutes. Database is accessible from other services. Connection pool shows available connections.",
    "files": ["/workspace/config/database.py", "/workspace/docker-compose.yml"],
    "runtime_info": "Docker Compose, PostgreSQL 13 in a separate container, connection pool size: 20"
  }
}
```

### 4. Logic & Business Rule Errors

**Business logic debugging**:
- **Edge Cases**: Boundary conditions, null handling, empty collections
- **State Management**: Inconsistent state, race conditions, data integrity
- **Algorithm Issues**: Incorrect implementations, complexity problems
- **Data Flow**: Transformation errors, validation failures, format issues

**Example**:
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "E-commerce cart total calculation occasionally shows incorrect amounts during checkout",
    "error_context": "Users report seeing different totals on the cart page vs the checkout page. Issue seems random but affects ~2% of transactions.",
    "files": ["/workspace/cart/", "/workspace/pricing/"],
    "previous_attempts": "Added logging to cart calculations, reviewed tax calculation logic",
    "runtime_info": "High-traffic e-commerce site, multiple discount types, international tax rules"
  }
}
```

## Response Format
|
||||
|
||||
### Comprehensive Debug Report
|
||||
|
||||
```json
|
||||
{
|
||||
"content": "# Debug Analysis Report\n\n## Problem Summary\n**Issue**: Segmentation fault during user authentication\n**Severity**: Critical - Application crashes\n**Scope**: Authentication module, affects all user logins\n\n## Root Cause Analysis\n\n### Primary Cause\n**Buffer overflow in bcrypt library** (`utils/crypto.py:23`)\n\n**Technical Details**:\n- bcrypt 3.2.0 has known vulnerability CVE-2023-xxxxx\n- `hash_password()` function passes unvalidated user input\n- Long passwords (>72 chars) trigger buffer overflow\n- Crash occurs in C extension, not caught by Python exception handling\n\n### Contributing Factors\n1. **Missing Input Validation** (`auth/login.py:42`)\n - No password length validation before hashing\n - No sanitization of special characters\n\n2. **Outdated Dependencies** (`requirements.txt:15`)\n - bcrypt 3.2.0 (vulnerable) vs 4.0.1 (patched)\n - No automated dependency scanning\n\n## Evidence Analysis\n\n### Stack Trace Breakdown\n```\nFile \"/app/auth/login.py\", line 45, in authenticate_user\n result = hash_password(password)\n```\n**Analysis**: Call originates from user input processing\n\n```\nFile \"/app/utils/crypto.py\", line 23, in hash_password\n return bcrypt.hashpw(password.encode(), salt)\n```\n**Analysis**: Crash occurs in bcrypt native code, indicates memory corruption\n\n### Code Analysis\n**Vulnerable Code** (`utils/crypto.py:20-25`):\n```python\ndef hash_password(password: str) -> str:\n salt = bcrypt.gensalt()\n # VULNERABILITY: No input validation\n return bcrypt.hashpw(password.encode(), salt) # Line 23 - crash point\n```\n\n## Reproduction Steps\n1. Send POST request to `/auth/login` with password > 72 characters\n2. Application processes authentication request\n3. `hash_password()` called with long password\n4. bcrypt.hashpw() triggers buffer overflow\n5. Segmentation fault occurs, container restarts\n\n## Immediate Fix\n\n### 1. 
Input Validation (Deploy immediately)\n```python\ndef hash_password(password: str) -> str:\n # Add input validation\n if len(password) > 72:\n raise ValueError(\"Password too long\")\n if not password:\n raise ValueError(\"Password cannot be empty\")\n \n salt = bcrypt.gensalt()\n return bcrypt.hashpw(password.encode(), salt)\n```\n\n### 2. Dependency Update\n```bash\n# Update requirements.txt\nbcrypt>=4.0.1 # Patched version\n\n# Rebuild and redeploy\ndocker build --no-cache .\n```\n\n## Long-term Solutions\n\n### 1. Comprehensive Input Validation\n- Implement request validation middleware\n- Add schema validation for all user inputs\n- Set maximum field lengths application-wide\n\n### 2. Security Hardening\n- Regular dependency scanning (Snyk, OWASP)\n- Automated security testing in CI/CD\n- Runtime application security monitoring\n\n### 3. Error Handling\n- Graceful handling of crypto operations\n- Proper exception propagation\n- User-friendly error messages\n\n## Testing Strategy\n\n### 1. Regression Tests\n```python\ndef test_long_password_handling():\n with pytest.raises(ValueError):\n hash_password('x' * 100) # Should raise, not crash\n\ndef test_empty_password_handling():\n with pytest.raises(ValueError):\n hash_password('')\n```\n\n### 2. Security Tests\n- Fuzzing tests for authentication endpoints\n- Boundary value testing for all inputs\n- Load testing with various input combinations\n\n## Monitoring & Prevention\n\n### 1. Enhanced Logging\n```python\ndef hash_password(password: str) -> str:\n logger.info(f\"Hashing password, length: {len(password)}\")\n # ... validation and hashing\n logger.info(\"Password hashing completed successfully\")\n```\n\n### 2. Alerting\n- Container restart alerts\n- Authentication failure rate monitoring\n- Dependency vulnerability scanning\n\n## Verification Steps\n1. Apply input validation patch\n2. Update bcrypt dependency\n3. Run regression test suite\n4. Deploy to staging environment\n5. Perform security testing\n6. 
Monitor for crashes (should be eliminated)\n7. Deploy to production with rollback plan",
  "metadata": {
    "error_type": "security_vulnerability",
    "severity": "critical",
    "root_cause": "buffer_overflow_in_dependency",
    "fix_complexity": "low",
    "estimated_fix_time": "2-4 hours",
    "risk_level": "high",
    "confidence_level": "high"
  },
  "diagnostic_data": {
    "stack_trace_analyzed": true,
    "vulnerability_identified": "CVE-2023-xxxxx",
    "affected_components": ["auth/login.py", "utils/crypto.py"],
    "reproduction_confirmed": true
  },
  "continuation_id": "debug-session-uuid",
  "status": "success"
}
```

## Advanced Debugging Patterns

### 1. Systematic Investigation Process

**Phase 1: Problem Definition**
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "Application experiencing intermittent 500 errors",
    "error_context": "Initial error logs and basic observations",
    "thinking_mode": "low"
  }
}
```

**Phase 2: Deep Analysis**
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "Refined problem statement based on initial analysis",
    "error_context": "Complete stack traces, detailed logs, profiling data",
    "files": ["/workspace/affected_modules/"],
    "continuation_id": "phase1-analysis-id",
    "thinking_mode": "high"
  }
}
```

**Phase 3: Solution Validation**
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "Proposed solution validation and testing strategy",
    "previous_attempts": "Previous analysis findings and proposed fixes",
    "continuation_id": "phase2-analysis-id",
    "thinking_mode": "medium"
  }
}
```
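
The three phases chain naturally through `continuation_id`. As a rough sketch, assuming a hypothetical `call_debug` helper that wraps your MCP client (it is not part of the Debug Tool API):

```python
# Hypothetical helper: `call_debug` stands in for whatever MCP client
# invocation your environment provides; it is not part of the Debug Tool API.
def run_phased_investigation(call_debug):
    """Chain the three phases, threading continuation_id between calls."""
    phase1 = call_debug({
        "error_description": "Application experiencing intermittent 500 errors",
        "error_context": "Initial error logs and basic observations",
        "thinking_mode": "low",
    })
    phase2 = call_debug({
        "error_description": "Refined problem statement based on initial analysis",
        "continuation_id": phase1["continuation_id"],
        "thinking_mode": "high",
    })
    phase3 = call_debug({
        "error_description": "Proposed solution validation and testing strategy",
        "continuation_id": phase2["continuation_id"],
        "thinking_mode": "medium",
    })
    return phase3
```

Each phase feeds its `continuation_id` into the next, so the tool retains accumulated context while the `thinking_mode` escalates and then tapers off.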

### 2. Multi-System Integration Debugging

**Component Isolation**:
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "Order processing pipeline failing at random points",
    "files": ["/workspace/order-service/", "/workspace/payment-service/", "/workspace/inventory-service/"],
    "runtime_info": "Microservices architecture, message queues, distributed database",
    "thinking_mode": "high"
  }
}
```

**Data Flow Analysis**:
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "Continuing order pipeline analysis with focus on data flow",
    "error_context": "Request/response logs, message queue contents, database state",
    "continuation_id": "component-analysis-id"
  }
}
```

### 3. Performance Debugging Workflow

**Resource Analysis**:
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "Memory usage climbing steadily leading to OOM kills",
    "error_context": "Memory profiling data, heap dumps, GC logs",
    "files": ["/workspace/memory-intensive-modules/"],
    "thinking_mode": "high"
  }
}
```

**Optimization Strategy**:
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "Memory leak root cause identified, need optimization strategy",
    "previous_attempts": "Profiling analysis completed, leak sources identified",
    "continuation_id": "memory-analysis-id"
  }
}
```

## Large File Analysis Capabilities

### 1M Token Context Window

**Comprehensive Log Analysis**:
- **Large Log Files**: Full application logs, database logs, system logs
- **Memory Dumps**: Complete heap dumps and stack traces
- **Profiling Data**: Detailed performance profiling outputs
- **Multiple File Types**: Logs, configs, source code, database dumps

**Example with Large Files**:
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "Production system crash analysis",
    "files": [
      "/workspace/logs/application.log",   // 50MB log file
      "/workspace/logs/database.log",      // 30MB log file
      "/workspace/dumps/heap_dump.txt",    // 100MB heap dump
      "/workspace/traces/stack_trace.log"  // 20MB stack trace
    ],
    "thinking_mode": "max"
  }
}
```

### Smart File Processing

**Priority-Based Processing**:
1. **Stack Traces**: Immediate analysis for crash cause
2. **Error Logs**: Recent errors and patterns
3. **Application Logs**: Business logic flow analysis
4. **System Logs**: Infrastructure and environment issues
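
The priority order above can be sketched as a simple sort. The filename-keyword heuristic below is an assumption for illustration, not the tool's actual detection logic:

```python
# Illustrative only: rank diagnostic files so stack traces are read first.
# The substring-based category detection is an assumption for this sketch.
PRIORITY = {"trace": 0, "error": 1, "application": 2, "system": 3}

def processing_order(paths):
    """Sort diagnostic files from highest to lowest analysis priority."""
    def rank(path):
        name = path.lower()
        for keyword, priority in PRIORITY.items():
            if keyword in name:
                return priority
        return len(PRIORITY)  # unknown files go last
    return sorted(paths, key=rank)
```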

**Content Analysis**:
- **Pattern Recognition**: Recurring errors and trends
- **Timeline Analysis**: Event correlation and sequence
- **Performance Metrics**: Response times, resource usage
- **Dependency Tracking**: External service interactions
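
As a toy illustration of pattern recognition, one can normalize volatile details out of error lines and count recurrences. The normalization rule here (replacing digit runs) is an assumption, not the tool's actual algorithm:

```python
import re
from collections import Counter

# Simplified sketch: strip volatile details (numbers) from error messages,
# then count how often each normalized signature recurs.
def recurring_errors(log_lines, top=3):
    counts = Counter()
    for line in log_lines:
        if "ERROR" not in line:
            continue
        signature = re.sub(r"\d+", "N", line.split("ERROR", 1)[1]).strip()
        counts[signature] += 1
    return counts.most_common(top)
```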

## Integration with Development Workflow

### 1. CI/CD Integration

**Automated Debugging**:
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "Build failure in CI pipeline",
    "error_context": "CI logs, test output, build artifacts",
    "files": ["/workspace/.github/workflows/", "/workspace/tests/"],
    "runtime_info": "GitHub Actions, Docker build, pytest"
  }
}
```

### 2. Production Incident Response

**Incident Analysis**:
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "Production outage - service unavailable",
    "error_context": "Monitoring alerts, service logs, infrastructure metrics",
    "files": ["/workspace/monitoring/", "/workspace/logs/"],
    "runtime_info": "Kubernetes cluster, multiple replicas, load balancer",
    "thinking_mode": "max"
  }
}
```

### 3. Code Review Integration

**Bug Investigation**:
```json
{
  "name": "debug",
  "arguments": {
    "error_description": "Regression introduced in recent PR",
    "files": ["/workspace/modified_files/"],
    "previous_attempts": "Code review completed, tests passing, issue found in production",
    "runtime_info": "Recent deployment, feature flag enabled"
  }
}
```

## Best Practices

### Effective Error Reporting

**Comprehensive Error Description**:
```
Error Description:
- What happened: Application crashes during user registration
- When: Occurs intermittently, ~10% of registration attempts
- Where: Registration form submission, after email validation
- Who: Affects both new and existing users
- Impact: Users cannot complete registration, data loss possible
```
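
To keep reports consistent across a team, the five questions above can be captured in a small structure and rendered into the `error_description` field. This helper is illustrative — the tool itself only requires free-form text:

```python
from dataclasses import dataclass

# Illustrative helper, not part of the Debug Tool API: render the
# what/when/where/who/impact checklist into an error_description string.
@dataclass
class ErrorReport:
    what: str
    when: str
    where: str
    who: str
    impact: str

    def to_description(self) -> str:
        return "\n".join([
            "Error Description:",
            f"- What happened: {self.what}",
            f"- When: {self.when}",
            f"- Where: {self.where}",
            f"- Who: {self.who}",
            f"- Impact: {self.impact}",
        ])
```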

**Detailed Context Provision**:
```
Error Context:
- Stack trace: [Full stack trace with line numbers]
- Request data: [Sanitized request payload]
- Environment state: [Memory usage, CPU load, active connections]
- Timing: [Request timestamps, duration, timeout values]
- Dependencies: [Database state, external API responses]
```

### Debugging Workflow

1. **Collect Comprehensive Information**: Gather all available diagnostic data
2. **Isolate the Problem**: Narrow down to specific components or operations
3. **Analyze Dependencies**: Consider external systems and interactions
4. **Validate Hypotheses**: Test theories with evidence and reproduction
5. **Document Findings**: Create detailed reports for future reference

### Performance Optimization

1. **Use Appropriate Thinking Mode**: Match complexity to issue severity
2. **Leverage Large Context**: Include comprehensive diagnostic files
3. **Iterative Analysis**: Use continuation for complex debugging sessions
4. **Cross-Reference**: Compare with similar issues and solutions
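
For the first point, one possible severity-to-mode mapping is sketched below; the pairing is a suggestion, not something the Debug Tool mandates:

```python
# A suggested severity-to-thinking_mode mapping; adjust to your own
# incident taxonomy. The labels on the left are assumptions for the sketch.
SEVERITY_TO_MODE = {
    "low": "minimal",
    "medium": "low",
    "high": "medium",
    "critical": "high",
    "outage": "max",
}

def pick_thinking_mode(severity: str) -> str:
    """Default to 'medium' when the severity label is unrecognized."""
    return SEVERITY_TO_MODE.get(severity, "medium")
```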

---

The Debug Tool provides systematic, expert-level debugging capabilities that can handle complex production issues while maintaining accuracy and providing actionable solutions for rapid incident resolution.

# Precommit Tool API Reference

## Overview

The **Precommit Tool** provides comprehensive automated quality gates and validation before commits. It performs deep analysis of git repositories, validates changes against architectural decisions, and ensures code quality standards are met before committing to version control.

## Tool Schema

```json
{
  "name": "precommit",
  "description": "Automated quality gates before commits",
  "inputSchema": {
    "type": "object",
    "properties": {
      "path": {
        "type": "string",
        "description": "Starting directory to search for git repositories (must be absolute path)"
      },
      "include_staged": {
        "type": "boolean",
        "default": true,
        "description": "Include staged changes in the review"
      },
      "include_unstaged": {
        "type": "boolean",
        "default": true,
        "description": "Include uncommitted (unstaged) changes in the review"
      },
      "compare_to": {
        "type": "string",
        "description": "Optional: A git ref (branch, tag, commit hash) to compare against",
        "optional": true
      },
      "review_type": {
        "type": "string",
        "enum": ["full", "security", "performance", "quick"],
        "default": "full",
        "description": "Type of review to perform on the changes"
      },
      "severity_filter": {
        "type": "string",
        "enum": ["critical", "high", "medium", "all"],
        "default": "all",
        "description": "Minimum severity level to report on the changes"
      },
      "original_request": {
        "type": "string",
        "description": "The original user request description for the changes",
        "optional": true
      },
      "focus_on": {
        "type": "string",
        "description": "Specific aspects to focus on (e.g., 'logic for user authentication', 'database query efficiency')",
        "optional": true
      },
      "thinking_mode": {
        "type": "string",
        "enum": ["minimal", "low", "medium", "high", "max"],
        "default": "medium",
        "description": "Thinking depth for the analysis"
      },
      "files": {
        "type": "array",
        "items": {"type": "string"},
        "description": "Optional files or directories to provide as context",
        "optional": true
      },
      "continuation_id": {
        "type": "string",
        "description": "Thread continuation ID for multi-turn conversations",
        "optional": true
      }
    },
    "required": ["path"]
  }
}
```

## Validation Process

### 1. Git Repository Analysis

**Repository Discovery**:
- **Recursive Search**: Finds all git repositories within the specified path
- **Multi-Repository Support**: Handles monorepos and nested repositories
- **Branch Detection**: Identifies the current branch and its tracking status
- **Change Detection**: Analyzes staged, unstaged, and committed changes

**Git State Assessment**:
```python
# Repository state analysis
{
    "repository_path": "/workspace/project",
    "current_branch": "feature/user-authentication",
    "tracking_branch": "origin/main",
    "ahead_by": 3,
    "behind_by": 0,
    "staged_files": 5,
    "unstaged_files": 2,
    "untracked_files": 1
}
```
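
Counts like `staged_files` and `untracked_files` above can be derived from `git status --porcelain` output, whose two-character status codes are standard git; the helper itself is an illustrative sketch, not the tool's implementation:

```python
# Parse `git status --porcelain` output: column 1 is the index (staged)
# status, column 2 the worktree (unstaged) status, "??" marks untracked.
def count_changes(porcelain_output: str) -> dict:
    staged = unstaged = untracked = 0
    for line in porcelain_output.splitlines():
        if not line:
            continue
        if line.startswith("??"):
            untracked += 1
            continue
        index_status, worktree_status = line[0], line[1]
        if index_status != " ":
            staged += 1
        if worktree_status != " ":
            unstaged += 1
    return {"staged_files": staged, "unstaged_files": unstaged, "untracked_files": untracked}
```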

### 2. Change Analysis Pipeline

**Staged Changes Review**:
```bash
# Git diff analysis for staged changes
git diff --staged --name-only
git diff --staged --unified=3
```

**Unstaged Changes Review**:
```bash
# Working directory changes analysis
git diff --name-only
git diff --unified=3
```

**Commit History Analysis**:
```bash
# Compare against target branch
git diff main...HEAD --name-only
git log --oneline main..HEAD
```

### 3. Quality Gate Validation

**Security Validation**:
- **Secret Detection**: Scans for API keys, passwords, tokens
- **Vulnerability Assessment**: Identifies security anti-patterns
- **Input Validation**: Reviews user input handling
- **Authentication Changes**: Validates auth/authz modifications
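
As a minimal sketch of the secret-detection idea: dedicated scanners (e.g. gitleaks, detect-secrets) use far richer rule sets and entropy analysis, and the two patterns below are illustrative assumptions only:

```python
import re

# Naive secret patterns, for illustration only: an AWS-style access key id
# shape, and obvious `key = "value"` assignments of sensitive names.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)(api[_-]?key|password|token)\s*[:=]\s*['\"][^'\"]+['\"]"),
]

def find_secrets(diff_text: str):
    """Return line numbers (1-based) that match any secret pattern."""
    hits = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                hits.append(lineno)
                break
    return hits
```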

**Performance Validation**:
- **Algorithm Analysis**: Reviews complexity and efficiency
- **Database Changes**: Validates query performance and indexing
- **Resource Usage**: Identifies potential memory or CPU issues
- **Caching Strategy**: Reviews caching implementation changes

**Quality Validation**:
- **Code Standards**: Enforces coding conventions and style
- **Documentation**: Ensures code changes include documentation updates
- **Testing**: Validates test coverage and quality
- **Technical Debt**: Flags newly introduced technical debt

**Architecture Validation**:
- **Design Patterns**: Ensures consistency with architectural decisions
- **Dependencies**: Reviews new dependencies and their impact
- **Integration**: Validates service integration changes
- **Breaking Changes**: Identifies potential breaking changes

## Usage Patterns

### 1. Standard Pre-Commit Validation

**Complete validation before committing**:
```json
{
  "name": "precommit",
  "arguments": {
    "path": "/workspace/project",
    "include_staged": true,
    "include_unstaged": false,
    "review_type": "full",
    "original_request": "Implemented user authentication with JWT tokens"
  }
}
```

### 2. Security-Focused Validation

**Security audit before sensitive commits**:
```json
{
  "name": "precommit",
  "arguments": {
    "path": "/workspace/security-module",
    "review_type": "security",
    "severity_filter": "high",
    "focus_on": "authentication mechanisms and input validation",
    "thinking_mode": "high"
  }
}
```
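
A `severity_filter` of `"high"` means only high- and critical-severity findings are reported. A sketch of that filtering, assuming findings carry a `severity` field (the ranking below, including `low`, is an assumption for the example):

```python
# Assumed severity ranking; the finding structure (dicts with a "severity"
# key) is illustrative, not the tool's actual report format.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def apply_severity_filter(findings, severity_filter="all"):
    """Keep findings at or above the requested minimum severity."""
    if severity_filter == "all":
        return list(findings)
    minimum = SEVERITY_RANK[severity_filter]
    return [f for f in findings if SEVERITY_RANK[f["severity"]] >= minimum]
```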

### 3. Feature Branch Validation

**Comprehensive review before merge**:
```json
{
  "name": "precommit",
  "arguments": {
    "path": "/workspace/project",
    "compare_to": "main",
    "review_type": "full",
    "original_request": "Complete user management feature with CRUD operations",
    "thinking_mode": "high"
  }
}
```

### 4. Performance Impact Assessment

**Performance validation for critical changes**:
```json
{
  "name": "precommit",
  "arguments": {
    "path": "/workspace/api-module",
    "review_type": "performance",
    "focus_on": "database queries and API response times",
    "compare_to": "main"
  }
}
```

### 5. Documentation Sync Validation

**Ensure documentation matches code changes**:
```json
{
  "name": "precommit",
  "arguments": {
    "path": "/workspace/",
    "focus_on": "documentation completeness and accuracy",
    "files": ["/workspace/docs/", "/workspace/README.md"],
    "original_request": "Updated API endpoints and added new features"
  }
}
```

## Response Format

### Comprehensive Validation Report

```json
{
"content": "# Pre-Commit Validation Report\n\n## Repository Analysis\n**Repository**: `/workspace/user-auth-service`\n**Branch**: `feature/jwt-authentication`\n**Changes**: 8 files modified, 245 lines added, 67 lines deleted\n**Commit Readiness**: ⚠️ **CONDITIONAL** - Address medium-severity issues\n\n## Change Summary\n### Files Modified (8)\n- `src/auth/jwt_handler.py` (new file, 89 lines)\n- `src/auth/middleware.py` (modified, +45/-12)\n- `src/models/user.py` (modified, +23/-8)\n- `tests/test_auth.py` (modified, +67/-15)\n- `requirements.txt` (modified, +3/-0)\n- `config/settings.py` (modified, +12/-5)\n- `docs/api/authentication.md` (modified, +18/-3)\n- `README.md` (modified, +6/-2)\n\n## Security Analysis ✅ PASSED\n\n### Strengths Identified\n- JWT implementation uses industry-standard `PyJWT` library\n- Proper secret key management via environment variables\n- Token expiration properly configured (24 hours)\n- Password hashing uses secure bcrypt with proper salt rounds\n\n### Security Validations\n- ✅ No hardcoded secrets detected\n- ✅ Input validation implemented for authentication endpoints\n- ✅ Proper error handling without information disclosure\n- ✅ HTTPS enforcement in middleware\n\n## Performance Analysis ⚠️ REVIEW REQUIRED\n\n### Medium Priority Issues (2)\n\n**🟡 Database Query Optimization** (`src/models/user.py:45`)\n```python\n# Current implementation\ndef get_user_by_email(email):\n return User.objects.filter(email=email).first()\n\n# Recommendation: Add database index\n# class User(models.Model):\n# email = models.EmailField(unique=True, db_index=True)\n```\n**Impact**: Authentication queries may be slow without email index\n**Priority**: Medium\n**Fix Time**: 5 minutes (migration required)\n\n**🟡 JWT Token Validation Caching** (`src/auth/jwt_handler.py:67`)\n```python\n# Consider implementing token validation caching\n# to reduce repeated signature verification overhead\n```\n**Impact**: Repeated JWT validation on every request\n**Priority**: 
Medium\n**Fix Time**: 30 minutes (optional optimization)\n\n## Code Quality Analysis ✅ PASSED\n\n### Quality Metrics\n- **Test Coverage**: 89% (target: 80%) ✅\n- **Code Complexity**: Low-Medium ✅ \n- **Documentation**: Complete API docs updated ✅\n- **Type Hints**: 95% coverage ✅\n\n### New Dependencies Validated\n```\nPyJWT==2.8.0 # Well-maintained, security-focused\nbcrypt==4.0.1 # Latest stable, no known vulnerabilities\npython-dotenv==1.0.0 # Standard environment management\n```\n**Dependency Assessment**: ✅ All dependencies are secure and well-maintained\n\n## Architecture Compliance ✅ PASSED\n\n### Design Pattern Validation\n- ✅ Follows established authentication middleware pattern\n- ✅ Proper separation of concerns (handler/middleware/models)\n- ✅ Consistent error handling approach\n- ✅ RESTful API design maintained\n\n### Integration Validation\n- ✅ Backward compatibility maintained\n- ✅ No breaking changes to existing API endpoints\n- ✅ Proper database migration provided\n- ✅ Environment configuration properly documented\n\n## Documentation Review ✅ PASSED\n\n### Documentation Updates\n- ✅ API documentation updated with authentication endpoints\n- ✅ README.md includes setup instructions for JWT\n- ✅ Environment variable documentation complete\n- ✅ Code comments adequate for complex authentication logic\n\n## Testing Validation ✅ PASSED\n\n### Test Coverage Analysis\n```\nFile Coverage\nsrc/auth/jwt_handler.py 92%\nsrc/auth/middleware.py 87%\nsrc/models/user.py 94%\ntests/test_auth.py 100%\n```\n\n### Test Quality\n- ✅ Unit tests for JWT token generation/validation\n- ✅ Integration tests for authentication endpoints\n- ✅ Edge case testing (expired tokens, invalid signatures)\n- ✅ Error condition testing\n\n## Final Recommendation\n\n### ✅ **APPROVED FOR COMMIT** with minor optimizations\n\n**Required Actions**: None (all critical issues resolved)\n\n**Recommended Actions**:\n1. **Add email index** to User model for better performance\n2. 
**Consider JWT caching** for high-traffic scenarios\n\n**Commit Confidence**: High (92%)\n\n### Suggested Commit Message\n```\nImplement JWT authentication system\n\n- Add JWT token generation and validation\n- Implement authentication middleware\n- Add secure password hashing with bcrypt\n- Update API documentation\n- Add comprehensive test coverage (89%)\n\nTesting: All tests pass, no breaking changes\nSecurity: No vulnerabilities detected\nPerformance: Minor optimization opportunities identified\n```\n\n## Next Steps\n1. **Immediate**: Commit current changes (approved)\n2. **Short-term**: Add database index for email field\n3. **Future**: Implement JWT validation caching for performance\n4. **Monitoring**: Track authentication performance metrics post-deployment",
  "metadata": {
    "repository_path": "/workspace/user-auth-service",
    "commit_readiness": "approved_with_recommendations",
    "files_analyzed": 8,
    "lines_changed": 178,
    "security_score": 9.2,
    "performance_score": 7.8,
    "quality_score": 9.1,
    "overall_confidence": 92,
    "validation_time": "12.3s"
  },
  "git_analysis": {
    "current_branch": "feature/jwt-authentication",
    "staged_files": 8,
    "unstaged_files": 0,
    "commits_ahead": 3,
    "target_branch": "main"
  },
  "continuation_id": "precommit-validation-uuid",
  "status": "success"
}
```

### Commit Readiness Levels

**✅ APPROVED**:
- No critical or high-severity issues
- All quality gates passed
- Documentation complete
- Tests comprehensive

**⚠️ CONDITIONAL**:
- Medium-severity issues present
- Some quality concerns
- Recommendations for improvement
- Can commit with awareness of the trade-offs

**❌ BLOCKED**:
- Critical security vulnerabilities
- High-severity performance issues
- Insufficient test coverage
- Breaking changes without proper migration
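
The three levels can be read as a decision rule over the severities found. This function mirrors the descriptions above, but it is illustrative, not the tool's actual implementation:

```python
# Illustrative decision rule: any critical/high finding blocks the commit,
# medium findings make it conditional, anything else is approved.
def commit_readiness(severities):
    if any(s in ("critical", "high") for s in severities):
        return "blocked"
    if "medium" in severities:
        return "conditional"
    return "approved"
```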

## Advanced Usage Patterns

### 1. Cross-Repository Validation

**Monorepo validation**:
```json
{
  "name": "precommit",
  "arguments": {
    "path": "/workspace/monorepo",
    "focus_on": "cross-service impact analysis",
    "files": ["/workspace/shared-libs/", "/workspace/service-contracts/"],
    "thinking_mode": "high"
  }
}
```

### 2. Compliance Validation

**Regulatory compliance check**:
```json
{
  "name": "precommit",
  "arguments": {
    "path": "/workspace/financial-service",
    "review_type": "security",
    "severity_filter": "critical",
    "focus_on": "PCI DSS compliance and data protection",
    "thinking_mode": "max"
  }
}
```

### 3. Migration Safety Validation

**Database migration validation**:
```json
{
  "name": "precommit",
  "arguments": {
    "path": "/workspace/api-service",
    "focus_on": "database migration safety and backward compatibility",
    "files": ["/workspace/migrations/", "/workspace/models/"],
    "original_request": "Database schema changes for user profiles feature"
  }
}
```

### 4. Integration Testing Validation

**Service integration changes**:
```json
{
  "name": "precommit",
  "arguments": {
    "path": "/workspace/microservices",
    "focus_on": "service contract changes and API compatibility",
    "compare_to": "main",
    "review_type": "full"
  }
}
```

## Integration with CI/CD

### Git Hook Integration

**Pre-commit hook implementation**:
```bash
#!/bin/sh
# .git/hooks/pre-commit

echo "Running pre-commit validation..."

# Call precommit tool via MCP
claude-code-cli --tool precommit --path "$(pwd)" --review-type full

if [ $? -ne 0 ]; then
    echo "Pre-commit validation failed. Commit blocked."
    exit 1
fi

echo "Pre-commit validation passed. Proceeding with commit."
```

### GitHub Actions Integration

**CI workflow with precommit validation**:
```yaml
name: Pre-commit Validation
on: [pull_request]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Precommit Validation
        run: |
          claude-code-cli --tool precommit \
            --path ${{ github.workspace }} \
            --compare-to origin/main \
            --review-type full
```

## Memory Bank Integration

### Architectural Decision Alignment

**Query past architectural decisions**:
```python
# Check alignment with architectural principles
architectural_decisions = memory.search_nodes("architecture security authentication")
design_patterns = memory.search_nodes("design patterns authentication")
```

**Validate against established patterns**:
```python
# Ensure changes follow established patterns
validation_results = memory.search_nodes("validation authentication security")
previous_reviews = memory.search_nodes("code review authentication")
```

### Context Preservation

**Store validation findings**:
```python
# Store precommit validation results
memory.create_entities([{
    "name": "Precommit Validation - JWT Authentication",
    "entityType": "quality_records",
    "observations": [
        "Security validation passed with high confidence",
        "Performance optimizations recommended but not blocking",
        "Documentation complete and accurate",
        "Test coverage exceeds target threshold"
    ]
}])
```

## Best Practices

### Effective Validation Strategy

1. **Regular Validation**: Use precommit for every commit, not just major changes
2. **Contextual Focus**: Provide the original request context for better validation
3. **Incremental Analysis**: Use continuation for complex multi-part features
4. **Severity-Appropriate Depth**: Match the thinking mode to the change's complexity and risk

### Repository Management

1. **Clean Working Directory**: Ensure a clean state before validation
2. **Targeted Analysis**: Focus on changed files and their dependencies
3. **Branch Strategy**: Compare against the appropriate target branch
4. **Documentation Sync**: Always validate documentation completeness

### Quality Gates

1. **Security First**: Never compromise on security findings
2. **Performance Aware**: Consider the performance impact of all changes
3. **Test Coverage**: Maintain or improve test coverage with changes
4. **Documentation Currency**: Keep documentation synchronized with code

---

The Precommit Tool provides comprehensive, automated quality assurance that integrates seamlessly with development workflows while maintaining high standards for security, performance, and code quality.

# ThinkDeep Tool API Reference
|
||||
|
||||
## Overview
|
||||
|
||||
The **ThinkDeep Tool** provides access to Gemini's maximum analytical capabilities for complex architecture decisions, system design, and strategic planning. It's designed for comprehensive analysis that requires deep computational thinking and extensive reasoning.
|
||||
|
||||
## Tool Schema
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "thinkdeep",
|
||||
"description": "Complex architecture, system design, strategic planning",
|
||||
"inputSchema": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"current_analysis": {
|
||||
"type": "string",
|
||||
"description": "Your current thinking/analysis to extend and validate"
|
||||
},
|
||||
"problem_context": {
|
||||
"type": "string",
|
||||
"description": "Additional context about the problem or goal",
|
||||
"optional": true
|
||||
},
|
||||
"focus_areas": {
|
||||
"type": "array",
|
||||
"items": {"type": "string"},
|
||||
"description": "Specific aspects to focus on (architecture, performance, security, etc.)",
|
||||
"optional": true
|
||||
},
|
||||
"files": {
|
||||
"type": "array",
|
||||
"items": {"type": "string"},
|
||||
"description": "Optional file paths or directories for additional context",
|
||||
"optional": true
|
||||
},
|
||||
"thinking_mode": {
|
||||
"type": "string",
|
||||
"enum": ["minimal", "low", "medium", "high", "max"],
|
||||
"default": "high",
|
||||
"description": "Thinking depth for analysis"
|
||||
},
|
||||
"temperature": {
|
||||
"type": "number",
|
||||
"minimum": 0,
|
||||
"maximum": 1,
|
||||
"default": 0.7,
|
||||
"description": "Temperature for creative thinking"
|
||||
},
|
||||
"continuation_id": {
|
||||
"type": "string",
|
||||
"description": "Thread continuation ID for multi-turn conversations",
|
||||
"optional": true
|
||||
}
|
||||
},
|
||||
"required": ["current_analysis"]
|
||||
}
|
||||
}
|
||||
```
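
For illustration, the schema's constraints can be enforced client-side before a call is made. The sketch below is not part of the server; `validate_args` is a hypothetical helper that mirrors the rules above using only the Python standard library.

```python
# Hypothetical client-side check mirroring the thinkdeep schema above.
THINKING_MODES = {"minimal", "low", "medium", "high", "max"}

def validate_args(args):
    """Return a list of schema violations; an empty list means valid."""
    errors = []
    analysis = args.get("current_analysis")
    if not isinstance(analysis, str) or not analysis:
        errors.append("current_analysis is required and must be a non-empty string")
    if args.get("thinking_mode", "high") not in THINKING_MODES:
        errors.append("thinking_mode must be one of: " + ", ".join(sorted(THINKING_MODES)))
    temperature = args.get("temperature", 0.7)
    if not isinstance(temperature, (int, float)) or not 0 <= temperature <= 1:
        errors.append("temperature must be a number between 0 and 1")
    for key in ("focus_areas", "files"):
        value = args.get(key)
        if value is not None and not (
            isinstance(value, list) and all(isinstance(v, str) for v in value)
        ):
            errors.append(f"{key} must be an array of strings")
    return errors

print(validate_args({"current_analysis": "Evaluate caching options"}))  # []
print(validate_args({"thinking_mode": "ultra", "temperature": 1.5}))
```

Rejecting malformed arguments before the request leaves the client saves a round trip to the server.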

## Usage Patterns

### 1. Architecture Decision Making

**Ideal For**:
- Evaluating architectural alternatives
- Designing system components
- Planning scalability strategies
- Technology selection decisions

**Example**:
```json
{
  "name": "thinkdeep",
  "arguments": {
    "current_analysis": "We have an MCP server that needs to handle 100+ concurrent Claude sessions. Currently using single-threaded processing with Redis for conversation memory.",
    "problem_context": "Growing user base requires better performance and reliability. Budget allows for infrastructure changes.",
    "focus_areas": ["scalability", "performance", "reliability", "cost"],
    "thinking_mode": "max"
  }
}
```

### 2. System Design Exploration

**Ideal For**:
- Complex system architecture
- Integration pattern analysis
- Security architecture design
- Performance optimization strategies

**Example**:
```json
{
  "name": "thinkdeep",
  "arguments": {
    "current_analysis": "Need to design a secure file processing pipeline that handles user uploads, virus scanning, content analysis, and storage with audit trails.",
    "focus_areas": ["security", "performance", "compliance", "monitoring"],
    "files": ["/workspace/security/", "/workspace/processing/"],
    "thinking_mode": "high"
  }
}
```

### 3. Strategic Technical Planning

**Ideal For**:
- Long-term technical roadmaps
- Migration strategies
- Technology modernization
- Risk assessment and mitigation

**Example**:
```json
{
  "name": "thinkdeep",
  "arguments": {
    "current_analysis": "Legacy monolithic application needs migration to microservices. 500K+ LOC, 50+ developers, critical business system with 99.9% uptime requirement.",
    "problem_context": "Must maintain business continuity while modernizing. Team has limited microservices experience.",
    "focus_areas": ["migration_strategy", "risk_mitigation", "team_training", "timeline"],
    "thinking_mode": "max",
    "temperature": 0.3
  }
}
```

### 4. Problem Solving & Innovation

**Ideal For**:
- Novel technical challenges
- Creative solution development
- Cross-domain problem analysis
- Innovation opportunities

**Example**:
```json
{
  "name": "thinkdeep",
  "arguments": {
    "current_analysis": "AI model serving platform needs to optimize GPU utilization across heterogeneous hardware while minimizing latency and maximizing throughput.",
    "focus_areas": ["resource_optimization", "scheduling", "performance", "cost_efficiency"],
    "thinking_mode": "max",
    "temperature": 0.8
  }
}
```

## Parameter Details

### current_analysis (required)
- **Type**: string
- **Purpose**: Starting point for deep analysis and extension
- **Best Practices**:
  - Provide comprehensive background and context
  - Include current understanding and assumptions
  - Mention constraints and requirements
  - Reference specific challenges or decision points

**Example Structure**:
```
Current Analysis:
- Problem: [Clear problem statement]
- Context: [Business/technical context]
- Current State: [What exists now]
- Requirements: [What needs to be achieved]
- Constraints: [Technical, business, resource limitations]
- Open Questions: [Specific areas needing analysis]
```

### problem_context (optional)
- **Type**: string
- **Purpose**: Additional contextual information
- **Usage**:
  - Business requirements and priorities
  - Technical constraints and dependencies
  - Team capabilities and limitations
  - Timeline and budget considerations

### focus_areas (optional)
- **Type**: array of strings
- **Purpose**: Directs analysis toward specific aspects
- **Common Values**:
  - **Technical**: `architecture`, `performance`, `scalability`, `security`
  - **Operational**: `reliability`, `monitoring`, `deployment`, `maintenance`
  - **Business**: `cost`, `timeline`, `risk`, `compliance`
  - **Team**: `skills`, `training`, `processes`, `communication`

### thinking_mode (optional)
- **Type**: string enum
- **Default**: "high"
- **Purpose**: Controls depth and computational budget
- **Recommendations by Use Case**:
  - **high** (16384 tokens): Standard complex analysis
  - **max** (32768 tokens): Critical decisions, comprehensive research
  - **medium** (8192 tokens): Moderate complexity, time-sensitive decisions
  - **low** (2048 tokens): Quick strategic input (unusual for thinkdeep)

### temperature (optional)
- **Type**: number (0.0 - 1.0)
- **Default**: 0.7
- **Purpose**: Balances analytical rigor with creative exploration
- **Guidelines**:
  - **0.0-0.3**: High accuracy, conservative recommendations (critical systems)
  - **0.4-0.7**: Balanced analysis with creative alternatives (most use cases)
  - **0.8-1.0**: High creativity, innovative solutions (research, innovation)
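
To make the two knobs concrete, the Python sketch below applies the documented defaults and token budgets when assembling a call. It is illustrative only; `build_request` is a hypothetical helper, not part of the server API.

```python
# Token budgets per thinking mode, as documented above (the reference
# does not state a budget for "minimal", so it is omitted here).
THINKING_BUDGET = {"low": 2048, "medium": 8192, "high": 16384, "max": 32768}

def build_request(current_analysis, thinking_mode="high", temperature=0.7, **optional):
    """Assemble a thinkdeep call with the documented defaults applied."""
    if thinking_mode != "minimal" and thinking_mode not in THINKING_BUDGET:
        raise ValueError(f"unknown thinking_mode: {thinking_mode}")
    if not 0 <= temperature <= 1:
        raise ValueError("temperature must be between 0 and 1")
    return {
        "name": "thinkdeep",
        "arguments": {
            "current_analysis": current_analysis,
            "thinking_mode": thinking_mode,
            "temperature": temperature,
            **optional,
        },
    }

# A critical-system analysis: maximum depth, conservative temperature.
req = build_request("Plan the monolith-to-microservices migration",
                    thinking_mode="max", temperature=0.3,
                    focus_areas=["migration_strategy", "risk_mitigation"])
print(req["arguments"]["thinking_mode"])  # max
```

Pairing a deep mode with a low temperature, as in the example, matches the guideline above for critical systems.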

## Response Format

### Comprehensive Analysis Structure

```json
{
  "content": "# Deep Analysis Report\n\n## Executive Summary\n[High-level findings and recommendations]\n\n## Current State Analysis\n[Detailed assessment of existing situation]\n\n## Alternative Approaches\n[Multiple solution paths with trade-offs]\n\n## Recommended Strategy\n[Specific recommendations with rationale]\n\n## Implementation Roadmap\n[Phased approach with milestones]\n\n## Risk Assessment\n[Potential challenges and mitigation strategies]\n\n## Success Metrics\n[Measurable outcomes and KPIs]\n\n## Next Steps\n[Immediate actions and decision points]",
  "metadata": {
    "thinking_mode": "high",
    "analysis_depth": "comprehensive",
    "alternatives_considered": 5,
    "focus_areas": ["architecture", "performance", "scalability"],
    "confidence_level": "high",
    "tokens_used": 15840,
    "analysis_time": "8.2s"
  },
  "continuation_id": "arch-analysis-550e8400",
  "status": "success"
}
```
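
Because `content` is a plain-markdown string, a client can slice the report by its `##` headings. A minimal sketch, with the response abbreviated:

```python
# Abbreviated response in the documented shape; "content" is markdown.
response = {
    "content": ("# Deep Analysis Report\n\n"
                "## Executive Summary\n[...]\n\n"
                "## Recommended Strategy\n[...]\n\n"
                "## Next Steps\n[...]"),
    "metadata": {"thinking_mode": "high", "tokens_used": 15840},
    "continuation_id": "arch-analysis-550e8400",
    "status": "success",
}

# Recover the section headings from the markdown report.
sections = [line[3:] for line in response["content"].splitlines()
            if line.startswith("## ")]
print(sections)  # ['Executive Summary', 'Recommended Strategy', 'Next Steps']
```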

### Analysis Components

**Executive Summary**:
- Key findings in 2-3 sentences
- Primary recommendation
- Critical decision points
- Success probability assessment

**Current State Analysis**:
- Strengths and weaknesses of existing approach
- Technical debt and architectural issues
- Performance bottlenecks and limitations
- Security and compliance gaps

**Alternative Approaches**:
- 3-5 distinct solution paths
- Trade-off analysis for each option
- Resource requirements and timelines
- Risk profiles and success factors

**Recommended Strategy**:
- Detailed recommendation with clear rationale
- Step-by-step implementation approach
- Resource allocation and timeline
- Success criteria and validation methods

**Risk Assessment**:
- Technical risks and mitigation strategies
- Business risks and contingency plans
- Team and organizational challenges
- External dependencies and uncertainties

## Advanced Usage Patterns

### 1. Multi-Phase Analysis

**Phase 1: Problem Exploration**
```json
{
  "name": "thinkdeep",
  "arguments": {
    "current_analysis": "Initial problem statement and context",
    "focus_areas": ["problem_definition", "requirements_analysis"],
    "thinking_mode": "high"
  }
}
```

**Phase 2: Solution Development**
```json
{
  "name": "thinkdeep",
  "arguments": {
    "current_analysis": "Previous analysis findings + refined problem definition",
    "focus_areas": ["solution_design", "architecture", "implementation"],
    "continuation_id": "previous-analysis-id",
    "thinking_mode": "max"
  }
}
```

**Phase 3: Implementation Planning**
```json
{
  "name": "thinkdeep",
  "arguments": {
    "current_analysis": "Chosen solution approach + design details",
    "focus_areas": ["implementation_strategy", "risk_mitigation", "timeline"],
    "continuation_id": "previous-analysis-id",
    "thinking_mode": "high"
  }
}
```
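
The three phases can be chained programmatically by threading each response's `continuation_id` into the next call. In this sketch, `send` is a hypothetical stand-in for the real MCP transport:

```python
def send(request):
    """Stand-in for the real MCP transport; echoes a minimal response."""
    return {"content": f"analysis of: {request['arguments']['current_analysis']}",
            "continuation_id": "thread-001",
            "status": "success"}

def thinkdeep(current_analysis, focus_areas, thinking_mode, continuation_id=None):
    arguments = {"current_analysis": current_analysis,
                 "focus_areas": focus_areas,
                 "thinking_mode": thinking_mode}
    if continuation_id:
        arguments["continuation_id"] = continuation_id
    return send({"name": "thinkdeep", "arguments": arguments})

phase1 = thinkdeep("Initial problem statement and context",
                   ["problem_definition", "requirements_analysis"], "high")
phase2 = thinkdeep(phase1["content"],  # carry findings into the next phase
                   ["solution_design", "architecture", "implementation"], "max",
                   continuation_id=phase1["continuation_id"])
phase3 = thinkdeep(phase2["content"],
                   ["implementation_strategy", "risk_mitigation", "timeline"], "high",
                   continuation_id=phase2["continuation_id"])
print(phase3["status"])  # success
```

Reusing the same continuation lets the server keep the full thread context, so each phase builds on the last instead of starting cold.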

### 2. Adversarial Analysis

**Primary Analysis**:
```json
{
  "name": "thinkdeep",
  "arguments": {
    "current_analysis": "Proposed solution with detailed rationale",
    "focus_areas": ["solution_validation", "feasibility"],
    "thinking_mode": "high",
    "temperature": 0.4
  }
}
```

**Devil's Advocate Review**:
```json
{
  "name": "thinkdeep",
  "arguments": {
    "current_analysis": "Previous analysis + instruction to challenge assumptions and find flaws",
    "focus_areas": ["risk_analysis", "failure_modes", "alternative_perspectives"],
    "continuation_id": "primary-analysis-id",
    "thinking_mode": "high",
    "temperature": 0.6
  }
}
```

### 3. Collaborative Decision Making

**Technical Analysis**:
```json
{
  "name": "thinkdeep",
  "arguments": {
    "current_analysis": "Technical requirements and constraints",
    "focus_areas": ["technical_feasibility", "architecture", "performance"],
    "thinking_mode": "high"
  }
}
```

**Business Analysis**:
```json
{
  "name": "thinkdeep",
  "arguments": {
    "current_analysis": "Technical findings + business context",
    "focus_areas": ["business_value", "cost_benefit", "strategic_alignment"],
    "continuation_id": "technical-analysis-id",
    "thinking_mode": "high"
  }
}
```

## Integration with Other Tools

### ThinkDeep → CodeReview Flow

```json
// 1. Strategic analysis
{
  "name": "thinkdeep",
  "arguments": {
    "current_analysis": "Need to refactor authentication system for better security",
    "focus_areas": ["security", "architecture"]
  }
}

// 2. Detailed code review based on strategic insights
{
  "name": "codereview",
  "arguments": {
    "files": ["/workspace/auth/"],
    "context": "Strategic analysis identified need for security-focused refactoring",
    "review_type": "security",
    "continuation_id": "strategic-analysis-id"
  }
}
```

### ThinkDeep → Analyze Flow

```json
// 1. High-level strategy
{
  "name": "thinkdeep",
  "arguments": {
    "current_analysis": "System performance issues under high load",
    "focus_areas": ["performance", "scalability"]
  }
}

// 2. Detailed codebase analysis
{
  "name": "analyze",
  "arguments": {
    "files": ["/workspace/"],
    "question": "Identify performance bottlenecks based on strategic analysis",
    "analysis_type": "performance",
    "continuation_id": "strategy-analysis-id"
  }
}
```

## Performance Characteristics

### Response Times by Thinking Mode
- **medium**: 4-8 seconds (unusual for thinkdeep)
- **high**: 8-15 seconds (recommended default)
- **max**: 15-30 seconds (comprehensive analysis)

### Quality Indicators
- **Depth**: Number of alternatives considered
- **Breadth**: Range of focus areas covered
- **Precision**: Specificity of recommendations
- **Actionability**: Clarity of next steps

### Resource Usage
- **Memory**: 200-500MB per analysis session
- **Network**: High (extensive Gemini API usage)
- **Storage**: Redis conversation persistence (48h TTL for complex analyses)
- **CPU**: Low (primarily network I/O bound)

## Best Practices

### Effective Analysis Prompts

**Provide Rich Context**:
```
Current Analysis:
We're designing a real-time collaborative editing system like Google Docs.
Key requirements:
- Support 1000+ concurrent users per document
- Sub-100ms latency for edits
- Conflict resolution for simultaneous edits
- Offline support with sync

Current challenges:
- Operational Transform vs CRDT decision
- Server architecture (centralized vs distributed)
- Client-side performance with large documents
- Database design for version history

Constraints:
- Team of 8 developers (2 senior, 6 mid-level)
- 6-month timeline
- Cloud-first deployment (AWS/Azure)
- Must integrate with existing authentication system
```

**Focus on Decisions**:
- Frame analysis around specific decisions that need to be made
- Include decision criteria and trade-offs
- Mention stakeholders and their priorities
- Reference timeline and resource constraints

### Conversation Management

1. **Use Continuation for Related Analyses**: Build complex understanding over multiple calls
2. **Reference Previous Insights**: Explicitly connect new analysis to previous findings
3. **Validate Assumptions**: Use follow-up calls to challenge and refine thinking
4. **Document Decisions**: Capture key insights for future reference

### Quality Optimization

1. **Match Thinking Mode to Complexity**: Use 'max' only for truly complex decisions
2. **Balance Temperature**: Lower for critical systems, higher for innovation
3. **Iterative Refinement**: Multiple focused analyses are often better than a single broad one
4. **Cross-Validation**: Use adversarial analysis for critical decisions

---

The ThinkDeep Tool serves as your strategic thinking partner, providing comprehensive analysis and creative problem-solving capabilities for the most challenging technical and architectural decisions.