Merge branch 'BeehiveInnovations:main' into feat-local_support_with_UTF-8_encoding-update

This commit is contained in:
OhMyApps
2025-06-23 12:51:56 +02:00
committed by GitHub
25 changed files with 1185 additions and 220 deletions

@@ -690,7 +690,7 @@ When a user requests a model (e.g., "pro", "o3", "example-large-v1"), the system
 2. OpenAI skips (Gemini already handled it)
 3. OpenRouter never sees it
-### Example: Model "claude-3-opus"
+### Example: Model "claude-4-opus"
 1. **Gemini provider** checks: NO, not my model → skip
 2. **OpenAI provider** checks: NO, not my model → skip
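The first-match routing this hunk documents can be sketched in a few lines. This is an illustrative sketch only: the `Provider` class, the prefix-based `handles()` check, and the provider names are assumptions for demonstration, not the server's actual API.

```python
# Sketch of first-match provider routing: each provider is asked in order
# whether it recognizes the requested model; the first match wins and
# later providers never see the request.

class Provider:
    def __init__(self, name, prefixes):
        self.name = name
        self.prefixes = prefixes

    def handles(self, model):
        # A provider claims a model only if it recognizes the name
        # (prefix matching is a stand-in for the real recognition logic).
        return any(model.startswith(p) for p in self.prefixes)


def route(model, providers):
    # Walk the priority-ordered chain; fall through to the catch-all.
    for provider in providers:
        if provider.handles(model):
            return provider.name
    return "openrouter"  # catch-all fallback


providers = [
    Provider("gemini", ["gemini", "pro", "flash"]),
    Provider("openai", ["o3", "gpt"]),
]

print(route("pro", providers))            # → gemini (claims it first)
print(route("claude-4-opus", providers))  # → openrouter (nobody else matched)
```

Because the chain short-circuits, ordering is the whole policy: a model name both Gemini and OpenAI could claim is always handled by whichever provider is registered first.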

@@ -41,9 +41,9 @@ The server uses `conf/custom_models.json` to map convenient aliases to both Open
 | Alias | Maps to OpenRouter Model |
 |-------|-------------------------|
-| `opus` | `anthropic/claude-3-opus` |
-| `sonnet`, `claude` | `anthropic/claude-3-sonnet` |
-| `haiku` | `anthropic/claude-3-haiku` |
+| `opus` | `anthropic/claude-opus-4` |
+| `sonnet`, `claude` | `anthropic/claude-sonnet-4` |
+| `haiku` | `anthropic/claude-3.5-haiku` |
 | `gpt4o`, `4o` | `openai/gpt-4o` |
 | `gpt4o-mini`, `4o-mini` | `openai/gpt-4o-mini` |
 | `pro`, `gemini` | `google/gemini-2.5-pro` |
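The alias table above boils down to a lookup against `conf/custom_models.json`. A minimal sketch of that resolution, assuming a schema where each entry carries a `model_name` and an `aliases` list (check the actual file for the real field names):

```python
import json

# Illustrative config fragment mirroring the alias table; the real
# conf/custom_models.json schema may differ.
config = json.loads("""
{
  "models": [
    {"model_name": "anthropic/claude-opus-4", "aliases": ["opus"]},
    {"model_name": "anthropic/claude-sonnet-4", "aliases": ["sonnet", "claude"]},
    {"model_name": "anthropic/claude-3.5-haiku", "aliases": ["haiku"]}
  ]
}
""")


def resolve(name):
    # Return the full OpenRouter model name for an alias;
    # unknown names pass through unchanged.
    for entry in config["models"]:
        if name.lower() in entry["aliases"] or name == entry["model_name"]:
            return entry["model_name"]
    return name


print(resolve("opus"))    # → anthropic/claude-opus-4
print(resolve("claude"))  # → anthropic/claude-sonnet-4
```

Pass-through of unknown names matters: it lets users request any full `vendor/model` name that OpenRouter supports without editing the config first.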
@@ -151,8 +151,8 @@ CUSTOM_MODEL_NAME=your-loaded-model
 **Using model aliases (from conf/custom_models.json):**
 ```
 # OpenRouter models:
-"Use opus for deep analysis" # → anthropic/claude-3-opus
-"Use sonnet to review this code" # → anthropic/claude-3-sonnet
+"Use opus for deep analysis" # → anthropic/claude-opus-4
+"Use sonnet to review this code" # → anthropic/claude-sonnet-4
 "Use pro via zen to analyze this" # → google/gemini-2.5-pro
 "Use gpt4o via zen to analyze this" # → openai/gpt-4o
 "Use mistral via zen to optimize" # → mistral/mistral-large
@@ -165,7 +165,7 @@ CUSTOM_MODEL_NAME=your-loaded-model
 **Using full model names:**
 ```
 # OpenRouter models:
-"Use anthropic/claude-3-opus via zen for deep analysis"
+"Use anthropic/claude-opus-4 via zen for deep analysis"
 "Use openai/gpt-4o via zen to debug this"
 "Use deepseek/deepseek-coder via zen to generate code"
@@ -249,7 +249,7 @@ Edit `conf/custom_models.json` to add new models. The configuration supports bot
 Popular models available through OpenRouter:
 - **GPT-4** - OpenAI's most capable model
-- **Claude 3** - Anthropic's models (Opus, Sonnet, Haiku)
+- **Claude 4** - Anthropic's models (Opus, Sonnet, Haiku)
 - **Mistral** - Including Mistral Large
 - **Llama 3** - Meta's open models
 - Many more at [openrouter.ai/models](https://openrouter.ai/models)
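Adding a model to `conf/custom_models.json` amounts to appending one entry to the file. The fragment below is a sketch only: every field name in it is an assumption for illustration, so check the existing entries in the file for the actual schema before copying it.

```json
{
  "model_name": "vendor/some-new-model",
  "aliases": ["newmodel"],
  "context_window": 200000
}
```

With an entry like this in place, "Use newmodel via zen" would resolve to `vendor/some-new-model` the same way the built-in aliases do.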
@@ -258,4 +258,4 @@ Popular models available through OpenRouter:
 - **"Model not found"**: Check exact model name at openrouter.ai/models
 - **"Insufficient credits"**: Add credits to your OpenRouter account
-- **"Model not available"**: Check your OpenRouter dashboard for model access permissions
+- **"Model not available"**: Check your OpenRouter dashboard for model access permissions

@@ -2,19 +2,21 @@
 **Step-by-step investigation followed by expert debugging assistance**
-The `debug` tool guides Claude through a systematic investigation process where Claude performs methodical code examination, evidence collection, and hypothesis formation across multiple steps. Once the investigation is complete, the tool provides expert analysis from the selected AI model based on all gathered findings.
 ## Thinking Mode
 **Default is `medium` (8,192 tokens).** Use `high` for tricky bugs (investment in finding root cause) or `low` for simple errors (save tokens).
+The `debug` workflow guides Claude through a systematic investigation process where Claude performs methodical code
+examination, evidence collection, and hypothesis formation across multiple steps. Once the investigation is complete,
+the tool provides expert analysis from the selected AI model (optionally) based on all gathered findings.
 ## Example Prompts
 **Basic Usage:**
 ```
 Get gemini to debug why my API returns 400 errors randomly with the full stack trace: [paste traceback]
 ```
 You can also ask it to debug on its own, no external model required (**recommended in most cases**).
 ```
 Use debug tool to find out why the app is crashing, here are some app logs [paste app logs] and a crash trace: [paste crash trace]
 ```
 ## How It Works
 The debug tool implements a **systematic investigation methodology** where Claude is guided through structured debugging steps:
@@ -78,39 +80,34 @@ This structured approach ensures Claude performs methodical groundwork before ex
 ## Usage Examples
-**Basic Error Debugging:**
+**Error Debugging:**
 ```
-"Debug this TypeError: 'NoneType' object has no attribute 'split' in my parser.py"
+Debug this TypeError: 'NoneType' object has no attribute 'split' in my parser.py
 ```
 **With Stack Trace:**
 ```
-"Use gemini to debug why my API returns 500 errors with this stack trace: [paste full traceback]"
+Use gemini to debug why my API returns 500 errors with this stack trace: [paste full traceback]
 ```
 **With File Context:**
 ```
-"Debug the authentication failure in auth.py and user_model.py with o3"
+Debug without using external model, the authentication failure in auth.py and user_model.py
 ```
 **Performance Debugging:**
 ```
-"Use pro to debug why my application is consuming excessive memory during bulk operations"
-```
-**With Visual Context:**
-```
-"Debug this crash using the error screenshot and the related crash_report.log"
+Debug without using external model to find out why the app is consuming excessive memory during bulk edit operations
 ```
 **Runtime Environment Issues:**
 ```
-"Debug deployment issues with server startup failures, here's the runtime info: [environment details]"
+Debug deployment issues with server startup failures, here's the runtime info: [environment details]
 ```
 ## Investigation Methodology
-The debug tool enforces a structured investigation process:
+The debug tool enforces a thorough, structured investigation process:
 **Step-by-Step Investigation (Claude-Led):**
 1. **Initial Problem Description:** Claude describes the issue and begins thinking about possible causes, side-effects, and contributing factors
@@ -120,7 +117,7 @@ The debug tool enforces a structured investigation process:
 5. **Iterative Refinement:** Claude can backtrack and revise previous steps as understanding evolves
 6. **Investigation Completion:** Claude signals when sufficient evidence has been gathered
-**Expert Analysis Phase (AI Model):**
+**Expert Analysis Phase (Another AI Model When Used):**
 Once investigation is complete, the selected AI model performs:
 - **Root Cause Analysis:** Deep analysis of all investigation findings and evidence
 - **Solution Recommendations:** Specific fixes with implementation guidance
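The investigation-then-expert-analysis flow described above can be sketched as a minimal step log. This is a hypothetical illustration of the shape of the workflow, not the tool's real data model: the step fields, the continue flag, and the example findings are all invented for demonstration.

```python
# Sketch of a step-by-step investigation log: each step records evidence
# and a working hypothesis, and a flag signals whether to keep going.

steps = []


def record_step(findings, hypothesis, continue_investigating):
    # Steps accumulate in order; earlier entries can be revised later
    # as understanding evolves (iterative refinement).
    steps.append({"findings": findings, "hypothesis": hypothesis})
    return continue_investigating


record_step("400s occur only on POST /orders", "body parsing issue", True)
record_step("failing payloads lack a Content-Type header", "client bug", False)

# Once the continue flag goes False, all accumulated findings are bundled
# up and handed to the expert model for root-cause analysis.
summary = " | ".join(s["findings"] for s in steps)
print(summary)
```

The key property is that the expert phase sees the whole accumulated log at once, rather than being consulted step by step, which is why the methodical groundwork matters.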