Compare commits


6 Commits

Author SHA1 Message Date
3feedd5698 consolidated readme 2026-02-08 20:27:35 +01:00
eb8553ce0b security: lock down OpenCode containers to read-only legal research
Add defense-in-depth restrictions via agent config and global permissions:
- Global permission layer denies bash, edit, webfetch, lsp
- Build agent tools restricted to read-only (grep/glob/list/read/todo)
- General/explore subagents locked to read-only
- Plan agent disabled to prevent mode switching
- Custom system prompt for legal research context (temp=0.2)
2026-02-08 20:22:57 +01:00
7dae8faf62 security: fix timing attack vulnerability and incorrect method call
- Use secrets.compare_digest() for token comparison instead of == to
  prevent timing-based attacks that could leak token information
- Fix rotate_session_auth_token() to call the correct method
  rotate_session_token() instead of non-existent rotate_session_auth_token()
2026-02-05 00:36:07 +01:00
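The constant-time comparison this commit describes can be sketched as follows (the function and variable names here are illustrative, not taken from the codebase):

```python
import secrets

def verify_session_token(provided: str, expected: str) -> bool:
    """Compare tokens in constant time.

    A plain == comparison short-circuits at the first differing byte,
    so response-time differences can leak how much of the token an
    attacker has guessed correctly. compare_digest always scans the
    full input.
    """
    return secrets.compare_digest(provided.encode(), expected.encode())
```

`secrets.compare_digest` accepts both `str` and `bytes` pairs; encoding to bytes first sidesteps any surprises with non-ASCII input.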
2cb5263d9e feat: add comprehensive OpenCode API endpoint proxies
Added proxy routes for all OpenCode internal API endpoints to support
full application functionality when accessed via session manager:
- project, agent, config, model endpoints
- thread, chat, conversation endpoints
- command, mcp, lsp, vcs endpoints
- permission, question, event, status endpoints
- internal session endpoint (distinct from container sessions)

Also updated Caddyfile for routing configuration.
2026-02-05 00:33:58 +01:00
d6f2ea90a8 fix: add missing _get_container_info method to AsyncDockerClient
docker_service.get_container_info() was calling self._docker_client._get_container_info()
but AsyncDockerClient didn't have this method, causing silent AttributeError and
returning None, which triggered false health check failures.

Added _get_container_info() using aiodocker's container.show() to properly retrieve
container state information for health monitoring.
2026-02-04 22:04:29 +01:00
69d18cc494 fix: session stability improvements
- Fix docker client initialization bug in app.py (context manager was closing client)
- Add restart_session() method to preserve session IDs during container restarts
- Add 60-second startup grace period before health checking new sessions
- Fix _stop_container and _get_container_info to use docker_service API consistently
- Disable mDNS in Dockerfile to prevent Bonjour service name conflicts
- Remove old container before restart to free port bindings
2026-02-04 19:10:03 +01:00
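The 60-second startup grace period mentioned above might look roughly like this (a minimal sketch; the real session objects and attribute names live in the session manager and may differ):

```python
from datetime import datetime, timedelta

STARTUP_GRACE_PERIOD = timedelta(seconds=60)

def sessions_due_for_health_check(sessions, now=None):
    """Filter out containers that started recently.

    OpenCode needs time to boot inside a fresh container; health-checking
    it immediately produces false failures and spurious restarts.
    """
    now = now or datetime.now()
    return [
        s for s in sessions
        if s["status"] == "running"
        and (now - s["created_at"]) > STARTUP_GRACE_PERIOD
    ]
```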
14 changed files with 700 additions and 313 deletions

.gitignore

@@ -1,2 +1,6 @@
__pycache__
.env
node_modules
cypress/screenshots
cypress/videos
cypress/downloads


@@ -28,5 +28,5 @@ EXPOSE 8080
# Set environment variables
ENV PYTHONPATH=/app
# Start OpenCode server (OPENCODE_API_KEY passed via environment)
CMD ["/bin/bash", "-c", "source /root/.bashrc && opencode serve --hostname 0.0.0.0 --port 8080 --mdns"]
# Start OpenCode server (mDNS disabled to prevent conflicts between containers)
CMD ["/bin/bash", "-c", "source /root/.bashrc && opencode serve --hostname 0.0.0.0 --port 8080"]


@@ -1,66 +0,0 @@
# Lovdata Chat Development Environment
This setup creates a container-per-visitor architecture for the Norwegian legal research chat interface with socket-based Docker communication.
## Quick Start
1. **Set up environment variables:**
```bash
cp .env.example .env
# Edit .env with your API keys and MCP server URL
```
2. **Start the services:**
```bash
docker-compose up --build
```
3. **Create a session:**
```bash
curl http://localhost/api/sessions -X POST
```
4. **Access the chat interface:**
Open the returned URL in your browser
## Architecture
- **session-manager**: FastAPI service managing container lifecycles with socket-based Docker communication
- **lovdata-mcp**: External Norwegian legal research MCP server (configured via MCP_SERVER env var)
- **caddy**: Reverse proxy with dynamic session-based routing
## Security Features
- **Socket-based Docker communication**: Direct Unix socket access for container management
- **Container isolation**: Each visitor gets dedicated container with resource limits
- **Automatic cleanup**: Sessions expire after 60 minutes of inactivity
- **Resource quotas**: 4GB RAM, 1 CPU core per container, max 3 concurrent sessions
## Development Notes
- Session data persists in ./sessions/ directory
- Docker socket mounted from host for development
- External MCP server configured via environment variables
- Health checks ensure service reliability
## API Endpoints
- `POST /api/sessions` - Create new session
- `GET /api/sessions` - List all sessions
- `GET /api/sessions/{id}` - Get session info
- `DELETE /api/sessions/{id}` - Delete session
- `POST /api/cleanup` - Manual cleanup
- `GET /api/health` - Health check
- `/{path}` - Dynamic proxy routing (with X-Session-ID header)
## Environment Variables
```bash
# Required
MCP_SERVER=http://your-lovdata-mcp-server:8001
# Optional LLM API keys
OPENAI_API_KEY=your_key
ANTHROPIC_API_KEY=your_key
GOOGLE_API_KEY=your_key
```

README.md

@@ -1,239 +1,162 @@
# Lovdata Chat Interface
A web-based chat interface that allows users to interact with Large Language Models (LLMs) equipped with Norwegian legal research tools from the Lovdata MCP server.
## Overview
This project creates a chat interface where users can:
- Choose from multiple LLM providers (OpenAI, Anthropic, Google Gemini)
- Have conversations enhanced with Norwegian legal document search capabilities
- Access laws, regulations, and legal provisions through AI-powered semantic search
- Receive properly cited legal information with cross-references
A container-per-session architecture for Norwegian legal research. Each user session gets an isolated [OpenCode](https://opencode.ai/) container connected to the external [Lovdata MCP server](https://modelcontextprotocol.io/), which provides 15+ tools for searching Norwegian laws, provisions, and cross-references.
## Architecture
### Backend (FastAPI)
- **LLM Provider Layer**: Abstract interface supporting multiple LLM providers with tool calling
- **MCP Integration**: Client connection to lovdata-ai MCP server
- **Skill System**: Norwegian legal research guidance and best practices
- **Chat Management**: Conversation history, streaming responses, session management
```
Users → Caddy (reverse proxy) → Session Manager (FastAPI)
                                         ↓
                               Docker-in-Docker daemon
                            ↓            ↓            ↓
                         [OC 1]       [OC 2]       [OC 3]    ← OpenCode containers
                            ↓            ↓            ↓
                    Lovdata MCP Server (external)
                    LLM APIs (OpenAI/Anthropic/Google)
```
### Frontend (Next.js)
- **Chat Interface**: Real-time messaging with streaming responses
- **Model Selector**: Dropdown to choose LLM provider and model
- **Tool Visualization**: Display when legal tools are being used
- **Citation Rendering**: Properly formatted legal references and cross-references
| Component | Purpose |
|-----------|---------|
| **Session Manager** | FastAPI service managing OpenCode container lifecycles |
| **OpenCode Containers** | Isolated chat environments with MCP integration |
| **Lovdata MCP Server** | External Norwegian legal research (laws, provisions, cross-references) |
| **Caddy** | Reverse proxy with dynamic session-based routing |
| **PostgreSQL** | Session persistence across restarts |
| **Docker-in-Docker** | TLS-secured Docker daemon for container management |
### External Dependencies
- **Lovdata MCP Server**: Provides 15+ tools for Norwegian legal research
- **PostgreSQL Database**: Vector embeddings for semantic search
- **LLM APIs**: OpenAI, Anthropic, Google Gemini (with API keys)
### Session Manager Components
## Supported LLM Providers
```
main.py → FastAPI endpoints, session lifecycle orchestration
docker_service.py → Docker abstraction layer (testable, mockable)
async_docker_client.py → Async Docker operations
database.py → PostgreSQL session persistence with asyncpg
session_auth.py → Token-based session authentication
container_health.py → Health monitoring and auto-recovery
resource_manager.py → CPU/memory limits, throttling
http_pool.py → Connection pooling for container HTTP requests
host_ip_detector.py → Docker host IP detection
logging_config.py → Structured JSON logging with context
```
| Provider | Models | Tool Support | Notes |
|----------|--------|--------------|-------|
| OpenAI | GPT-4, GPT-4o | ✅ Native | Requires API key |
| Anthropic | Claude-3.5-Sonnet | ✅ Native | Requires API key |
| Google | Gemini-1.5-Pro | ✅ Function calling | Requires API key |
| Local | Ollama models | ⚠️ Limited | Self-hosted option |
## Quick Start
## MCP Tools Available
1. **Set up environment variables:**
```bash
cp .env.example .env
# Edit .env with your API keys and MCP server URL
```
The interface integrates all tools from the lovdata-ai MCP server:
2. **Start the services:**
```bash
docker-compose up --build
```
### Law Document Tools
- `get_law`: Retrieve specific laws by ID or title
- `list_laws`: Browse laws with filtering and pagination
- `get_law_content`: Get HTML content of laws
- `get_law_text`: Get plain text content
3. **Create a session:**
```bash
curl http://localhost/api/sessions -X POST
```
### Search Tools
- `search_laws_fulltext`: Full-text search in laws
- `search_laws_semantic`: Semantic search using vector embeddings
- `search_provisions_fulltext`: Full-text search in provisions
- `search_provisions_semantic`: Semantic search in provisions
4. **Access the chat interface** at the URL returned in step 3.
### Provision Tools
- `get_provision`: Get individual legal provisions
- `list_provisions`: List all provisions in a law
- `get_provisions_batch`: Bulk retrieval for RAG applications
## Development
### Reference Tools
- `get_cross_references`: Find references from/to provisions
- `resolve_reference`: Parse legal reference strings (e.g., "lov/2014-06-20-42/§8")
### Running the Stack
## Skills Integration
The system loads Norwegian legal research skills that ensure:
- Proper citation standards (Lovdata URL formatting)
- Appropriate legal terminology usage
- Clear distinction between information and legal advice
- Systematic amendment tracking
- Cross-reference analysis
## Implementation Plan
### Phase 1: Core Infrastructure
1. **Project Structure Setup**
- Create backend (FastAPI) and frontend (Next.js) directories
- Set up Python virtual environment and Node.js dependencies
- Configure development tooling (linting, testing, formatting)
2. **LLM Provider Abstraction**
- Create abstract base class for LLM providers
- Implement OpenAI, Anthropic, and Google Gemini clients
- Add tool calling support and response streaming
- Implement provider switching logic
3. **MCP Server Integration**
- Build MCP client to connect to lovdata-ai server
- Create tool registry and execution pipeline
- Add error handling and retry logic
- Implement tool result formatting for LLM consumption
### Phase 2: Chat Functionality
4. **Backend API Development**
- Create chat session management endpoints
- Implement conversation history storage
- Add streaming response support
- Build health check and monitoring endpoints
5. **Skill System Implementation**
- Create skill loading and parsing system
- Implement skill application to LLM prompts
- Add skill validation and error handling
- Create skill management API endpoints
### Phase 3: Frontend Development
6. **Chat Interface**
- Build responsive chat UI with message history
- Implement real-time message streaming
- Add message formatting for legal citations
- Create conversation management (new chat, clear history)
7. **Model Selection UI**
- Create LLM provider and model selector
- Add API key management (secure storage)
- Implement model switching during conversations
- Add model capability indicators
8. **Tool Usage Visualization**
- Display when MCP tools are being used
- Show tool execution results in chat
- Add legal citation formatting
- Create expandable tool result views
### Phase 4: Deployment & Production
9. **Containerization**
- Create Dockerfiles for backend and frontend
- Set up Docker Compose for development
- Configure production Docker Compose
- Add environment variable management
10. **Deployment Configuration**
- Set up CI/CD pipeline (GitHub Actions)
- Configure cloud deployment (Railway/Render)
- Add reverse proxy configuration
- Implement SSL certificate management
11. **Monitoring & Error Handling**
- Add comprehensive logging
- Implement error tracking and reporting
- Create health check endpoints
- Add rate limiting and abuse protection
12. **Documentation**
- Create setup and deployment guides
- Document API endpoints
- Add user documentation
- Create troubleshooting guides
## Development Setup
### Prerequisites
- Python 3.12+
- Node.js 18+
- Docker and Docker Compose
- API keys for desired LLM providers
### Local Development
```bash
# Clone and setup
git clone <repository>
cd lovdata-chat
# Start all services (session-manager, docker-daemon, caddy)
docker-compose up --build
# Backend setup
cd backend
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Start in background
docker-compose up -d --build
# View logs
docker-compose logs -f session-manager
# Stop services
docker-compose down
```
### Session Management API
```bash
POST /api/sessions # Create new session
GET /api/sessions # List all sessions
GET /api/sessions/{id} # Get session info
DELETE /api/sessions/{id} # Delete session
POST /api/cleanup # Manual cleanup
GET /api/health # Health check
```
### Running Locally (without Docker)
```bash
cd session-manager
pip install -r requirements.txt
# Frontend setup
cd ../frontend
npm install
# Start development servers
docker-compose -f docker-compose.dev.yml up
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
### Environment Variables
### Testing
Test scripts live in `docker/scripts/` and are self-contained:
```bash
# Backend
LOVDATA_MCP_URL=http://localhost:8001
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
GOOGLE_API_KEY=your_key_here
# Frontend
NEXT_PUBLIC_API_URL=http://localhost:8000
python docker/scripts/test-docker-service.py
python docker/scripts/test-async-docker.py
python docker/scripts/test-resource-limits.py
python docker/scripts/test-session-auth.py
python docker/scripts/test-database-persistence.py
python docker/scripts/test-container-health.py
python docker/scripts/test-http-connection-pool.py
python docker/scripts/test-host-ip-detection.py
python docker/scripts/test-structured-logging.py
```
## Deployment Options
### Building the OpenCode Image
### Cloud Deployment (Recommended)
- **Frontend**: Vercel or Netlify
- **Backend**: Railway, Render, or Fly.io
- **Database**: Use existing lovdata-ai PostgreSQL instance
```bash
make build MCP_SERVER=http://your-lovdata-server:8001
make run # Run interactively
make clean # Clean up
```
### Self-Hosted Deployment
- **Docker Compose**: Full stack containerization
- **Reverse Proxy**: Nginx or Caddy
- **SSL**: Let's Encrypt automatic certificates
## Environment Configuration
## Security Considerations
Required variables (see `.env.example`):
- API keys stored securely (environment variables, secret management)
- Rate limiting on chat endpoints
- Input validation and sanitization
- CORS configuration for frontend-backend communication
- Audit logging for legal tool usage
```bash
MCP_SERVER=http://localhost:8001 # External Lovdata MCP server URL
## Performance Optimization
# Docker TLS (if using TLS instead of socket)
DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=/etc/docker/certs
DOCKER_HOST=tcp://host.docker.internal:2376
- Response streaming for real-time chat experience
- MCP tool result caching
- Conversation history pagination
- Lazy loading of legal document content
- CDN for static frontend assets
# Optional LLM keys (at least one required for chat)
OPENAI_API_KEY=...
ANTHROPIC_API_KEY=...
GOOGLE_API_KEY=...
```
## Future Enhancements
## Security
- User authentication and conversation persistence
- Advanced citation management and export
- Integration with legal research workflows
- Multi-language support beyond Norwegian
- Advanced analytics and usage tracking
**Docker socket**: Default setup uses socket mounting (`/var/run/docker.sock`). For production, enable TLS:
## Contributing
```bash
cd docker && DOCKER_ENV=production ./scripts/generate-certs.sh
./scripts/setup-docker-tls.sh
```
1. Follow the implementation plan phases
2. Ensure comprehensive testing for LLM integrations
3. Document API changes and new features
4. Maintain security best practices for API key handling
**Session isolation:**
- Each session gets a dedicated container
- Resource limits: 4GB RAM, 1 CPU core per container
- Max 3 concurrent sessions (configurable via `resource_manager.py`)
- Auto-cleanup after 60 minutes inactivity
- Token-based session authentication
---
## Further Documentation
**Status**: Planning phase complete. Ready for implementation.
**Next Steps**: Begin with Phase 1 - Project Structure Setup
- [`CLAUDE.md`](CLAUDE.md) — AI assistant guidance for working with this codebase
- [`LOW_PRIORITY_IMPROVEMENTS.md`](LOW_PRIORITY_IMPROVEMENTS.md) — Backlog of non-critical improvements
- [`docs/project-analysis.md`](docs/project-analysis.md) — Detailed architectural analysis
- `docker/*.md` — Implementation docs for individual components


@@ -4,6 +4,83 @@
  "autoupdate": false,
  "model": "opencode/kimi-k2.5-free",
  "plugin": [],
  // Global permissions — defense-in-depth safety net across ALL agents
  "permission": {
    "bash": "deny",
    "edit": "deny",
    "webfetch": "deny",
    "read": "allow",
    "grep": "allow",
    "glob": "allow",
    "list": "allow",
    "todoread": "allow",
    "todowrite": "allow",
    "lsp": "deny",
    "task": "allow",
    "skill": "allow"
  },
  "agent": {
    // Primary agent — locked to read-only + Lovdata MCP tools
    "build": {
      "mode": "primary",
      "prompt": "{file:./prompts/legal-research.md}",
      "temperature": 0.2,
      "tools": {
        "bash": false,
        "write": false,
        "edit": false,
        "patch": false,
        "webfetch": false,
        "read": true,
        "grep": true,
        "glob": true,
        "list": true,
        "todowrite": true,
        "todoread": true
      }
    },
    // Disable plan agent — users shouldn't switch modes
    "plan": {
      "mode": "primary",
      "disable": true
    },
    // Lock down general subagent — it normally has full tool access
    "general": {
      "mode": "subagent",
      "tools": {
        "bash": false,
        "write": false,
        "edit": false,
        "patch": false,
        "webfetch": false,
        "read": true,
        "grep": true,
        "glob": true,
        "list": true
      }
    },
    // Explore subagent is already read-only, but be explicit
    "explore": {
      "mode": "subagent",
      "tools": {
        "bash": false,
        "write": false,
        "edit": false,
        "patch": false,
        "webfetch": false,
        "read": true,
        "grep": true,
        "glob": true,
        "list": true
      }
    }
  },
  "mcp": {
    "sequential-thinking": {
      "type": "local",


@@ -0,0 +1,25 @@
You are a Norwegian legal research assistant powered by Lovdata.
Your role is to help users research Norwegian laws (lover), regulations (forskrifter), and legal concepts using the Lovdata MCP tools available to you.
## What you can do
- Search and retrieve Norwegian laws and regulations via Lovdata
- Explain legal concepts in clear Norwegian (or English when asked)
- Provide proper citations with Lovdata URLs
- Trace cross-references between legal provisions
- Track amendment history
## What you cannot do
- You cannot execute shell commands, create files, or modify files
- You are a research tool, not a lawyer. Always recommend professional legal consultation for specific legal situations
- Clearly distinguish between legal information and legal advice
## Guidelines
- Always cite specific Lovdata URLs with amendment dates
- Distinguish between laws (lover) and regulations (forskrifter)
- Use the correct document ID prefixes: `NL/lov/` for laws, `SF/forskrift/` for regulations
- Consider the hierarchical legal structure and cross-references
- Respond in the same language the user writes in (Norwegian or English)


@@ -30,6 +30,8 @@ services:
# Host configuration
- DOCKER_HOST_IP=${DOCKER_HOST_IP:-host.docker.internal}
- DOCKER_TLS_PORT=${DOCKER_TLS_PORT:-2376}
# Disable database storage (use in-memory)
- USE_DATABASE_STORAGE=false
networks:
- lovdata-network
restart: unless-stopped


@@ -9,6 +9,12 @@ http://localhost {
reverse_proxy session-manager:8000
}
# OpenCode internal session API (without session_id in path)
# Must be BEFORE /session/{session_id}* to match first
handle /session {
reverse_proxy session-manager:8000
}
# Session-specific routing - proxy to session manager for dynamic routing
handle /session/{session_id}* {
reverse_proxy session-manager:8000
@@ -51,6 +57,124 @@ http://localhost {
reverse_proxy session-manager:8000
}
# Additional OpenCode API endpoints for root-path operation
handle /agent {
reverse_proxy session-manager:8000
}
handle /agent/* {
reverse_proxy session-manager:8000
}
handle /config {
reverse_proxy session-manager:8000
}
handle /config/* {
reverse_proxy session-manager:8000
}
handle /model {
reverse_proxy session-manager:8000
}
handle /model/* {
reverse_proxy session-manager:8000
}
handle /thread/* {
reverse_proxy session-manager:8000
}
handle /chat/* {
reverse_proxy session-manager:8000
}
handle /tree {
reverse_proxy session-manager:8000
}
handle /tree/* {
reverse_proxy session-manager:8000
}
handle /conversation {
reverse_proxy session-manager:8000
}
handle /conversation/* {
reverse_proxy session-manager:8000
}
handle /project/* {
reverse_proxy session-manager:8000
}
# OpenCode communication endpoints for message sending
handle /command {
reverse_proxy session-manager:8000
}
handle /command/* {
reverse_proxy session-manager:8000
}
handle /mcp {
reverse_proxy session-manager:8000
}
handle /mcp/* {
reverse_proxy session-manager:8000
}
handle /lsp {
reverse_proxy session-manager:8000
}
handle /lsp/* {
reverse_proxy session-manager:8000
}
handle /vcs {
reverse_proxy session-manager:8000
}
handle /vcs/* {
reverse_proxy session-manager:8000
}
handle /permission {
reverse_proxy session-manager:8000
}
handle /permission/* {
reverse_proxy session-manager:8000
}
handle /question {
reverse_proxy session-manager:8000
}
handle /question/* {
reverse_proxy session-manager:8000
}
handle /event {
reverse_proxy session-manager:8000
}
handle /event/* {
reverse_proxy session-manager:8000
}
handle /status {
reverse_proxy session-manager:8000
}
handle /status/* {
reverse_proxy session-manager:8000
}
# Health check
handle /health {
reverse_proxy session-manager:8000


@@ -5,7 +5,6 @@ from fastapi import FastAPI
from config import USE_ASYNC_DOCKER, USE_DATABASE_STORAGE
from session_manager import session_manager
from async_docker_client import get_async_docker_client
from http_pool import init_http_pool, shutdown_http_pool
from database import init_database, shutdown_database, run_migrations
from container_health import (
@@ -43,12 +42,9 @@ async def lifespan(app: FastAPI):
session_manager._load_sessions_from_file()
try:
docker_client = None
if USE_ASYNC_DOCKER:
async with get_async_docker_client() as client:
docker_client = client._docker if hasattr(client, "_docker") else None
else:
docker_client = session_manager.docker_service
# Use the session manager's docker_service for health monitoring
# This ensures the docker client stays alive for the lifetime of the application
docker_client = session_manager.docker_service
await start_container_health_monitoring(session_manager, docker_client)
logger.info("Container health monitoring started")


@@ -199,6 +199,23 @@ class AsyncDockerClient:
        except DockerError:
            return None

    async def _get_container_info(self, container_id: str) -> Optional[Dict[str, Any]]:
        """
        Get detailed container information (equivalent to docker inspect).

        Returns the full container info dict including State, Config, etc.
        """
        try:
            container = await self._docker.containers.get(container_id)
            if container:
                # show() returns the full container inspect data
                return await container.show()
        except DockerError as e:
            logger.debug(f"Failed to get container info for {container_id}: {e}")
        except Exception as e:
            logger.debug(f"Unexpected error getting container info: {e}")
        return None

    async def list_containers(
        self, all: bool = False, filters: Optional[Dict[str, Any]] = None
    ) -> List[DockerContainer]:


@@ -129,23 +129,30 @@ class ContainerHealthMonitor:
"""Main monitoring loop."""
while self._monitoring:
try:
await self._perform_health_checks()
await self._check_all_containers()
await self._cleanup_old_history()
except Exception as e:
logger.error("Error in health monitoring loop", extra={"error": str(e)})
await asyncio.sleep(self.check_interval)
async def _perform_health_checks(self):
async def _check_all_containers(self):
"""Perform health checks on all running containers."""
if not self.session_manager:
return
# Get all running sessions
from datetime import datetime, timedelta
# Startup grace period - don't check containers that started recently
startup_grace_period = timedelta(seconds=60)
now = datetime.now()
# Get all running sessions that are past the startup grace period
running_sessions = [
session
for session in self.session_manager.sessions.values()
if session.status == "running"
if session.status == "running"
and (now - session.created_at) > startup_grace_period
]
if not running_sessions:
@@ -263,23 +270,30 @@ class ContainerHealthMonitor:
async def _get_container_info(self, container_id: str) -> Optional[Dict[str, Any]]:
"""Get container information from Docker."""
try:
if self.docker_client:
# Try async Docker client first
container = await self.docker_client.get_container(container_id)
if hasattr(container, "_container"):
return await container._container.show()
elif hasattr(container, "show"):
return await container.show()
else:
# Fallback to sync client if available
if (
hasattr(self.session_manager, "docker_client")
and self.session_manager.docker_client
):
container = self.session_manager.docker_client.containers.get(
container_id
)
return container.attrs
# Use session_manager.docker_service for consistent container access
if (
self.session_manager
and hasattr(self.session_manager, "docker_service")
and self.session_manager.docker_service
):
container_info = await self.session_manager.docker_service.get_container_info(container_id)
if container_info:
# Convert ContainerInfo to dict format expected by health check
return {
"State": {
"Status": container_info.status,
"Health": {"Status": container_info.health_status} if container_info.health_status else {}
}
}
elif self.docker_client and hasattr(self.docker_client, "get_container_info"):
container_info = await self.docker_client.get_container_info(container_id)
if container_info:
return {
"State": {
"Status": container_info.status,
"Health": {"Status": container_info.health_status} if container_info.health_status else {}
}
}
except Exception as e:
logger.debug(
f"Failed to get container info for {container_id}",
@@ -384,8 +398,8 @@ class ContainerHealthMonitor:
# Trigger container restart through session manager
if self.session_manager:
# Create new container for the session
await self.session_manager.create_session()
# Restart container for the SAME session (preserves session_id)
await self.session_manager.restart_session(session_id)
logger.info(
"Container restart initiated",
extra={
@@ -418,17 +432,22 @@ class ContainerHealthMonitor:
async def _stop_container(self, container_id: str):
"""Stop a container."""
try:
if self.docker_client:
container = await self.docker_client.get_container(container_id)
await self.docker_client.stop_container(container, timeout=10)
elif (
hasattr(self.session_manager, "docker_client")
and self.session_manager.docker_client
# Use session_manager.docker_service for container operations
# docker_service.stop_container takes container_id as a string
if (
self.session_manager
and hasattr(self.session_manager, "docker_service")
and self.session_manager.docker_service
):
container = self.session_manager.docker_client.containers.get(
container_id
await self.session_manager.docker_service.stop_container(container_id, timeout=10)
elif self.docker_client and hasattr(self.docker_client, "stop_container"):
# If docker_client is docker_service, use it directly
await self.docker_client.stop_container(container_id, timeout=10)
else:
logger.warning(
"No docker client available to stop container",
extra={"container_id": container_id},
)
container.stop(timeout=10)
except Exception as e:
logger.warning(
"Failed to stop container during restart",


@@ -84,6 +84,192 @@ async def proxy_file_path_to_session(request: Request, path: str):
    return await proxy_to_session(request, session_id, f"file/{path}")

# Additional OpenCode API endpoints for root-path operation
@router.api_route("/project/{path:path}", methods=ALL_METHODS)
async def proxy_project_path_to_session(request: Request, path: str):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, f"project/{path}")

@router.api_route("/agent", methods=ALL_METHODS)
async def proxy_agent_to_session(request: Request):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, "agent")

@router.api_route("/agent/{path:path}", methods=ALL_METHODS)
async def proxy_agent_path_to_session(request: Request, path: str):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, f"agent/{path}")

@router.api_route("/config", methods=ALL_METHODS)
async def proxy_config_to_session(request: Request):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, "config")

@router.api_route("/config/{path:path}", methods=ALL_METHODS)
async def proxy_config_path_to_session(request: Request, path: str):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, f"config/{path}")

@router.api_route("/model", methods=ALL_METHODS)
async def proxy_model_to_session(request: Request):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, "model")

@router.api_route("/model/{path:path}", methods=ALL_METHODS)
async def proxy_model_path_to_session(request: Request, path: str):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, f"model/{path}")

@router.api_route("/thread/{path:path}", methods=ALL_METHODS)
async def proxy_thread_path_to_session(request: Request, path: str):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, f"thread/{path}")

@router.api_route("/chat/{path:path}", methods=ALL_METHODS)
async def proxy_chat_path_to_session(request: Request, path: str):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, f"chat/{path}")

@router.api_route("/tree", methods=ALL_METHODS)
async def proxy_tree_to_session(request: Request):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, "tree")

@router.api_route("/tree/{path:path}", methods=ALL_METHODS)
async def proxy_tree_path_to_session(request: Request, path: str):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, f"tree/{path}")

@router.api_route("/conversation", methods=ALL_METHODS)
async def proxy_conversation_to_session(request: Request):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, "conversation")

@router.api_route("/conversation/{path:path}", methods=ALL_METHODS)
async def proxy_conversation_path_to_session(request: Request, path: str):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, f"conversation/{path}")

# OpenCode session and communication endpoints for message sending
@router.api_route("/command", methods=ALL_METHODS)
async def proxy_command_to_session(request: Request):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, "command")

@router.api_route("/command/{path:path}", methods=ALL_METHODS)
async def proxy_command_path_to_session(request: Request, path: str):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, f"command/{path}")

@router.api_route("/mcp", methods=ALL_METHODS)
async def proxy_mcp_to_session(request: Request):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, "mcp")

@router.api_route("/mcp/{path:path}", methods=ALL_METHODS)
async def proxy_mcp_path_to_session(request: Request, path: str):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, f"mcp/{path}")

@router.api_route("/lsp", methods=ALL_METHODS)
async def proxy_lsp_to_session(request: Request):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, "lsp")

@router.api_route("/lsp/{path:path}", methods=ALL_METHODS)
async def proxy_lsp_path_to_session(request: Request, path: str):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, f"lsp/{path}")

@router.api_route("/vcs", methods=ALL_METHODS)
async def proxy_vcs_to_session(request: Request):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, "vcs")

@router.api_route("/vcs/{path:path}", methods=ALL_METHODS)
async def proxy_vcs_path_to_session(request: Request, path: str):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, f"vcs/{path}")

@router.api_route("/permission", methods=ALL_METHODS)
async def proxy_permission_to_session(request: Request):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, "permission")

@router.api_route("/permission/{path:path}", methods=ALL_METHODS)
async def proxy_permission_path_to_session(request: Request, path: str):
    session_id = get_session_from_cookie(request)
    return await proxy_to_session(request, session_id, f"permission/{path}")

@router.api_route("/question", methods=ALL_METHODS)
async def proxy_question_to_session(request: Request):
session_id = get_session_from_cookie(request)
return await proxy_to_session(request, session_id, "question")
@router.api_route("/question/{path:path}", methods=ALL_METHODS)
async def proxy_question_path_to_session(request: Request, path: str):
session_id = get_session_from_cookie(request)
return await proxy_to_session(request, session_id, f"question/{path}")
@router.api_route("/event", methods=ALL_METHODS)
async def proxy_event_to_session(request: Request):
session_id = get_session_from_cookie(request)
return await proxy_to_session(request, session_id, "event")
@router.api_route("/event/{path:path}", methods=ALL_METHODS)
async def proxy_event_path_to_session(request: Request, path: str):
session_id = get_session_from_cookie(request)
return await proxy_to_session(request, session_id, f"event/{path}")
@router.api_route("/status", methods=ALL_METHODS)
async def proxy_status_to_session(request: Request):
session_id = get_session_from_cookie(request)
return await proxy_to_session(request, session_id, "status")
@router.api_route("/status/{path:path}", methods=ALL_METHODS)
async def proxy_status_path_to_session(request: Request, path: str):
session_id = get_session_from_cookie(request)
return await proxy_to_session(request, session_id, f"status/{path}")
# OpenCode internal session endpoint (different from our container sessions)
# Must be defined BEFORE /session/{session_id}/{path} to match first
@router.api_route("/session", methods=ALL_METHODS)
async def proxy_internal_session_to_session(request: Request):
session_id = get_session_from_cookie(request)
# Proxy the request directly, preserving query params
path = "session"
return await proxy_to_session(request, session_id, path)
@router.api_route("/session/{session_id}/{path:path}", methods=ALL_METHODS)
async def proxy_to_session(request: Request, session_id: str, path: str):
start_time = time.time()
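The comment above the `/session` route notes that the literal route must be registered before the parameterized `/session/{session_id}/{path}` route. A framework-free sketch (illustrative patterns and handler names, not the actual router) of that first-match dispatch behavior:

```python
import re

# Ordered route table: the first matching pattern wins, mirroring a router
# that tries routes in registration order.
routes = [
    (re.compile(r"^/session$"), "opencode-internal-session"),
    (re.compile(r"^/session/(?P<session_id>[^/]+)/(?P<path>.*)$"), "container-session"),
]

def dispatch(url: str) -> str:
    for pattern, handler in routes:
        if pattern.match(url):
            return handler
    return "not-found"

assert dispatch("/session") == "opencode-internal-session"
assert dispatch("/session/abc123/tree/src") == "container-session"
```

Because the literal pattern is tried first, a bare `/session` request reaches the OpenCode-internal handler instead of being captured by the container-session route.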


@@ -83,8 +83,8 @@ class SessionTokenManager:
         session_data = self._session_tokens[session_id]

-        # Check if token matches
-        if session_data["token"] != token:
+        # Check if token matches using constant-time comparison to prevent timing attacks
+        if not secrets.compare_digest(session_data["token"], token):
             return False, "Invalid token"

         # Check if token has expired
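The commit swaps `==` for `secrets.compare_digest`, whose running time does not depend on where the first mismatching character occurs. A minimal stdlib-only sketch (token value illustrative, not from the codebase):

```python
import secrets

stored_token = "a1b2c3d4e5f6"  # illustrative value only

# compare_digest examines the inputs in constant time relative to their
# contents, so response timing leaks nothing about the token's prefix,
# unlike ==, which returns as soon as a character differs.
assert secrets.compare_digest(stored_token, "a1b2c3d4e5f6")
assert not secrets.compare_digest(stored_token, "a1b2c3d4e5fX")
```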
@@ -212,7 +212,7 @@ def revoke_session_auth_token(session_id: str) -> bool:

 def rotate_session_auth_token(session_id: str) -> Optional[str]:
     """Rotate a session authentication token."""
-    return _session_token_manager.rotate_session_auth_token(session_id)
+    return _session_token_manager.rotate_session_token(session_id)


 def cleanup_expired_auth_tokens() -> int:


@@ -367,6 +367,86 @@ class SessionManager:
    async def list_sessions(self) -> List[SessionData]:
        return list(self.sessions.values())

    async def restart_session(self, session_id: str) -> Optional[SessionData]:
        """Restart a session's container while preserving the session ID.

        Unlike create_session(), this reuses the existing session data
        and only creates a new container, maintaining session ID continuity.
        This method removes the old container to free up the port.
        """
        session = await self.get_session(session_id)
        if not session:
            logger.error(
                "Cannot restart session: not found",
                extra={"session_id": session_id},
            )
            return None

        old_container_id = session.container_id
        logger.info(
            "Restarting session container",
            extra={"session_id": session_id, "old_container_id": old_container_id},
        )

        # Stop and remove old container to free up the port
        if old_container_id and self.docker_service:
            try:
                logger.info(
                    "Stopping old container for restart",
                    extra={"session_id": session_id, "container_id": old_container_id},
                )
                await self.docker_service.stop_container(old_container_id)
            except Exception as e:
                logger.warning(
                    "Failed to stop old container (may already be stopped)",
                    extra={"session_id": session_id, "container_id": old_container_id, "error": str(e)},
                )
            try:
                logger.info(
                    "Removing old container for restart",
                    extra={"session_id": session_id, "container_id": old_container_id},
                )
                await self.docker_service.remove_container(old_container_id, force=True)
            except Exception as e:
                logger.warning(
                    "Failed to remove old container",
                    extra={"session_id": session_id, "container_id": old_container_id, "error": str(e)},
                )

        # Generate new container name for the restart
        new_container_name = f"opencode-{session_id}-{uuid.uuid4().hex[:8]}"
        session.container_name = new_container_name
        session.container_id = None  # Clear old container_id
        session.status = "starting"

        # Update session in store before starting container
        self.sessions[session_id] = session
        if USE_DATABASE_STORAGE:
            try:
                await SessionModel.update_session(
                    session_id,
                    {
                        "container_name": new_container_name,
                        "container_id": None,
                        "status": "starting",
                    },
                )
            except Exception as e:
                logger.error(
                    "Failed to update session in database during restart",
                    extra={"session_id": session_id, "error": str(e)},
                )

        # Start new container for this session
        if USE_ASYNC_DOCKER:
            asyncio.create_task(self._start_container_async(session))
        else:
            asyncio.create_task(self._start_container_sync(session))

        return session

    async def list_containers_async(self, all: bool = False) -> List:
        return await self.docker_service.list_containers(all=all)
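The restart path above generates a fresh container name on every restart. A tiny stdlib-only sketch of that naming scheme (session ID illustrative; real IDs come from the session store):

```python
import re
import uuid

session_id = "sess-demo"  # illustrative placeholder

# Same pattern as restart_session: a random 8-hex-character suffix keeps
# the new container name distinct from the one being removed, avoiding
# Docker name collisions if removal is slow or fails.
name = f"opencode-{session_id}-{uuid.uuid4().hex[:8]}"
assert re.fullmatch(r"opencode-sess-demo-[0-9a-f]{8}", name)
```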