Compare commits

10 Commits: 75226b2fe6 ... ef9a071aa1

| Author | SHA1 | Date |
|---|---|---|
|  | ef9a071aa1 |  |
|  | e2fdfa75d7 |  |
|  | 4119fdcba7 |  |
|  | c669b3bb24 |  |
|  | dd0885b3a5 |  |
|  | 28bb208469 |  |
|  | 8f356cdf51 |  |
|  | 03705acf3a |  |
|  | b5c11baece |  |
|  | 85af540076 |  |

.github/workflows/backend-unit-tests.yml

@@ -0,0 +1,35 @@
name: Unit Tests

on:
  pull_request:
    types: [opened, synchronize, reopened, ready_for_review]

concurrency:
  group: unit-tests-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  backend-unit-tests:
    if: github.event.pull_request.draft == false
    runs-on: ubuntu-latest
    timeout-minutes: 15

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Install uv
        uses: astral-sh/setup-uv@v6

      - name: Install backend dependencies
        working-directory: backend
        run: uv sync --group dev

      - name: Run unit tests of backend
        working-directory: backend
        run: uv run pytest tests/test_provisioner_kubeconfig.py tests/test_docker_sandbox_mode_detection.py

@@ -0,0 +1,343 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

DeerFlow is an open-source **AI super agent harness** built on LangGraph/LangChain. It's a full-stack application that orchestrates sub-agents, memory, sandboxes, and extensible skills to perform complex multi-step tasks.

**Architecture**:
- **LangGraph Server** (port 2024): Agent runtime and workflow execution
- **Gateway API** (port 8001): FastAPI REST API for models, MCP, skills, memory, artifacts, uploads
- **Frontend** (port 3000): Next.js 16 web interface
- **Nginx** (port 2026): Unified reverse proxy entry point
- **Provisioner** (port 8002, optional): Started only when the sandbox is configured for provisioner/Kubernetes mode

**Key Technologies**:
- **Backend**: Python 3.12+, LangGraph/LangChain, FastAPI, uv package manager
- **Frontend**: Next.js 16, React 19, TypeScript 5.8, Tailwind CSS 4, pnpm
- **AI/ML**: Model-agnostic (any OpenAI-compatible API), multi-model support with thinking/vision capabilities

## Commands

### Root Directory (Full Application)
```bash
make check          # Check system requirements (Node.js 22+, pnpm, uv, nginx)
make config         # Generate local configuration files from templates
make install        # Install all dependencies (frontend + backend)
make setup-sandbox  # Pre-pull sandbox container image (recommended for Docker sandbox)
make dev            # Start all services (LangGraph + Gateway + Frontend + Nginx) on localhost:2026
make stop           # Stop all running services
make clean          # Clean up processes and temporary files

# Docker Development (Recommended)
make docker-init    # Build custom k3s image with pre-cached sandbox image
make docker-start   # Start Docker services (mode-aware from config.yaml)
make docker-stop    # Stop Docker development services
make docker-logs    # View Docker development logs
```

### Backend Directory (`/backend/`)
```bash
make install    # Install backend dependencies with uv
make dev        # Run LangGraph server only (port 2024)
make gateway    # Run Gateway API only (port 8001)
make lint       # Lint with ruff
make format     # Format code with ruff
uv run pytest   # Run backend tests
```

### Frontend Directory (`/frontend/`)
```bash
pnpm dev        # Dev server with Turbopack (http://localhost:3000)
pnpm build      # Production build
pnpm check      # Lint + type check (run before committing)
pnpm lint       # ESLint only
pnpm lint:fix   # ESLint with auto-fix
pnpm typecheck  # TypeScript type check (tsc --noEmit)
pnpm start      # Start production server
```

## Architecture

### Microservices Architecture
```
        ┌──────────────────────────────────────┐
        │           Nginx (Port 2026)          │
        │         Unified reverse proxy        │
        └───────┬──────────────────┬───────────┘
                │                  │
/api/langgraph/*│                  │ /api/* (other)
                ▼                  ▼
┌────────────────────┐  ┌────────────────────────┐
│  LangGraph Server  │  │   Gateway API (8001)   │
│    (Port 2024)     │  │      FastAPI REST      │
│                    │  │                        │
│ ┌────────────────┐ │  │  Models, MCP, Skills,  │
│ │   Lead Agent   │ │  │   Memory, Uploads,     │
│ │  ┌──────────┐  │ │  │       Artifacts        │
│ │  │Middleware│  │ │  └────────────────────────┘
│ │  │  Chain   │  │ │
│ │  └──────────┘  │ │
│ │  ┌──────────┐  │ │
│ │  │  Tools   │  │ │
│ │  └──────────┘  │ │
│ │  ┌──────────┐  │ │
│ │  │Subagents │  │ │
│ │  └──────────┘  │ │
│ └────────────────┘ │
└────────────────────┘
```

### Core Components

**Lead Agent** (`src/agents/lead_agent/agent.py`):
- Entry point: `make_lead_agent(config: RunnableConfig)` registered in `langgraph.json`
- Dynamic model selection via `create_chat_model()` with thinking/vision support
- Tools loaded via `get_available_tools()` - combines sandbox, built-in, MCP, community, and subagent tools

**Middleware Chain** (11 middlewares in strict order):
1. **ThreadDataMiddleware** - Creates per-thread isolated directories
2. **UploadsMiddleware** - Tracks and injects newly uploaded files
3. **SandboxMiddleware** - Acquires sandbox, stores `sandbox_id` in state
4. **DanglingToolCallMiddleware** - Injects placeholder ToolMessages for interrupted tool calls
5. **SummarizationMiddleware** - Context reduction when approaching token limits
6. **TodoListMiddleware** - Task tracking with `write_todos` tool (plan mode)
7. **TitleMiddleware** - Auto-generates thread title
8. **MemoryMiddleware** - Queues conversations for async memory update
9. **ViewImageMiddleware** - Injects base64 image data before LLM call (vision support)
10. **SubagentLimitMiddleware** - Truncates excess `task` tool calls to enforce concurrency limits
11. **ClarificationMiddleware** - Intercepts `ask_clarification` tool calls, interrupts via `Command(goto=END)`

**Sandbox System** (`src/sandbox/`):
- **Interface**: Abstract `Sandbox` with `execute_command`, `read_file`, `write_file`, `list_dir`
- **Provider Pattern**: `SandboxProvider` with `acquire`, `get`, `release` lifecycle
- **Implementations**: `LocalSandboxProvider` (local filesystem), `AioSandboxProvider` (Docker-based isolation)
- **Virtual Path System**: Agent sees `/mnt/user-data/{workspace,uploads,outputs}` and `/mnt/skills` mapped to physical paths (sketched below)
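
A minimal illustration of that mapping (assumed shape; the real resolution lives in the sandbox providers):

```python
# Illustrative only: how a virtual agent path maps onto a thread's physical
# directory tree. The actual logic lives in the sandbox providers.
from pathlib import Path

VIRTUAL_PREFIX = "/mnt/user-data"

def to_physical(virtual_path: str, thread_dir: Path) -> Path:
    """Map e.g. /mnt/user-data/outputs/report.md -> <thread_dir>/outputs/report.md."""
    if not virtual_path.startswith(VIRTUAL_PREFIX + "/"):
        raise ValueError(f"not a user-data path: {virtual_path}")
    rel = virtual_path[len(VIRTUAL_PREFIX) + 1 :]
    return thread_dir / rel

print(to_physical("/mnt/user-data/outputs/report.md",
                  Path(".deer-flow/threads/t1/user-data")))
# .deer-flow/threads/t1/user-data/outputs/report.md
```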

**Subagent System** (`src/subagents/`):
- **Built-in Agents**: `general-purpose` (all tools except `task`) and `bash` (command specialist)
- **Concurrency**: `MAX_CONCURRENT_SUBAGENTS = 3` enforced by `SubagentLimitMiddleware`
- **Execution**: Dual thread pool - `_scheduler_pool` (3 workers) + `_execution_pool` (3 workers)
- **Flow**: `task()` tool → `SubagentExecutor` → background thread → poll 5s → SSE events → result

**Memory System** (`src/agents/memory/`):
- **Components**: `updater.py` (LLM-based memory updates), `queue.py` (debounced update queue), `prompt.py` (templates)
- **Data Structure**: Stored in `backend/.deer-flow/memory.json` with user context, history, and facts
- **Workflow**: MemoryMiddleware filters messages → queue debounces (30s) → background LLM extracts updates → atomic file I/O (a generic debounce sketch follows)
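
A generic illustration of the 30-second debounce described above (not the project's actual queue; stdlib threading only):

```python
# Generic debounce: rapid triggers collapse into a single delayed call.
import threading

class Debouncer:
    def __init__(self, delay_s: float, fn):
        self._delay_s = delay_s
        self._fn = fn
        self._timer: threading.Timer | None = None
        self._lock = threading.Lock()

    def trigger(self, *args) -> None:
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()  # new activity restarts the countdown
            self._timer = threading.Timer(self._delay_s, self._fn, args)
            self._timer.daemon = True
            self._timer.start()

updater = Debouncer(30.0, lambda tid: print(f"update memory for {tid}"))
updater.trigger("thread-1")  # repeated triggers within 30s fire only once
```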

**Tool System** (`src/tools/`):
- **Config-defined tools**: Resolved from `config.yaml` via `resolve_variable()`
- **MCP tools**: From enabled MCP servers (lazy initialized, cached with mtime invalidation)
- **Built-in tools**: `present_files`, `ask_clarification`, `view_image` (vision support)
- **Subagent tool**: `task` (delegate to subagent)
- **Community tools**: `tavily/` (web search/fetch), `jina_ai/` (web fetch), `firecrawl/` (scraping), `image_search/` (DuckDuckGo)

**Skills System** (`src/skills/`):
- **Location**: `deer-flow/skills/{public,custom}/`
- **Format**: Directory with `SKILL.md` (YAML frontmatter: name, description, license, allowed-tools)
- **Loading**: `load_skills()` scans directories, parses SKILL.md, reads enabled state from extensions_config.json
- **Installation**: `POST /api/skills/install` extracts a .skill ZIP archive to the custom/ directory

### Configuration System

**Main Configuration** (`config.yaml` in project root):
- Setup: Copy `config.example.yaml` to `config.yaml`
- Configuration priority: explicit `config_path` → `DEER_FLOW_CONFIG_PATH` env → `config.yaml` in current dir → `config.yaml` in parent dir
- Config values starting with `$` are resolved as environment variables (e.g., `$OPENAI_API_KEY`); a sketch of this rule follows
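
A minimal sketch of the `$`-prefix convention (the helper name here is hypothetical):

```python
# Hedged sketch of the `$`-prefix rule described above.
import os

def resolve_value(value):
    """A leading '$' means 'read this setting from the environment'."""
    if isinstance(value, str) and value.startswith("$"):
        return os.environ.get(value[1:])
    return value

os.environ["OPENAI_API_KEY"] = "sk-demo"
assert resolve_value("$OPENAI_API_KEY") == "sk-demo"
assert resolve_value("gpt-4o") == "gpt-4o"  # non-$ values pass through unchanged
```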

**Extensions Configuration** (`extensions_config.json` in project root):
- Contains MCP servers and skills configuration
- Configuration priority: explicit `config_path` → `DEER_FLOW_EXTENSIONS_CONFIG_PATH` env → `extensions_config.json` in current dir → `extensions_config.json` in parent dir

**Key Config Sections**:
- `models[]` - LLM configs with `use` class path, `supports_thinking`, `supports_vision`
- `tools[]` - Tool configs with `use` variable path and `group`
- `tool_groups[]` - Logical groupings for tools
- `sandbox.use` - Sandbox provider class path
- `skills.path` / `skills.container_path` - Host and container paths to skills directory
- `title` - Auto-title generation
- `summarization` - Context summarization
- `subagents.enabled` - Master switch for subagent delegation
- `memory` - Memory system configuration

### Gateway API (`src/gateway/`)

FastAPI application on port 8001 with a health check at `GET /health`.

**Routers**:
- **Models** (`/api/models`): `GET /` (list models), `GET /{name}` (model details)
- **MCP** (`/api/mcp`): `GET /config` (get config), `PUT /config` (update config)
- **Skills** (`/api/skills`): `GET /` (list), `GET /{name}` (details), `PUT /{name}` (update enabled), `POST /install` (install from .skill archive)
- **Memory** (`/api/memory`): `GET /` (memory data), `POST /reload` (force reload), `GET /config` (config), `GET /status` (config + data)
- **Uploads** (`/api/threads/{id}/uploads`): `POST /` (upload files with auto-conversion), `GET /list` (list), `DELETE /{filename}` (delete)
- **Artifacts** (`/api/threads/{id}/artifacts`): `GET /{path}` (serve artifacts), `?download=true` for file download
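
For example, a quick smoke test against a locally running gateway (httpx is already imported by the gateway code; endpoints as listed above):

```python
# Smoke test against a locally running Gateway API (started via `make dev`).
import httpx

BASE = "http://localhost:8001"

print(httpx.get(f"{BASE}/health").status_code)        # expect 200
print(httpx.get(f"{BASE}/api/models").json())         # configured models
print(httpx.get(f"{BASE}/api/memory/status").json())  # memory config + data
```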

### Frontend Architecture (`/frontend/`)

**Source Layout** (`src/`):
- **`app/`** - Next.js App Router: `/` (landing), `/workspace/chats/[thread_id]` (chat)
- **`components/`** - React components: `ui/` (Shadcn UI), `ai-elements/` (Vercel AI SDK), `workspace/` (chat), `landing/` (landing)
- **`core/`** - Business logic: `threads/` (state management), `api/` (LangGraph client), `artifacts/` (loading/caching), `i18n/`, `settings/`, `memory/`, `skills/`, `messages/`, `mcp/`, `models/`
- **`hooks/`** - Shared React hooks
- **`lib/`** - Utilities (`cn()` from clsx + tailwind-merge)
- **`server/`** - Server-side code (better-auth, not yet active)
- **`styles/`** - Global CSS with Tailwind v4 `@import` syntax

**Data Flow**:
1. User input → thread hooks (`core/threads/hooks.ts`) → LangGraph SDK streaming
2. Stream events update thread state (messages, artifacts, todos)
3. TanStack Query manages server state; localStorage stores user settings
4. Components subscribe to thread state and render updates

## Development Workflow

### Running the Full Application
From the **project root** directory:
```bash
make dev
```
This starts all services and makes the application available at `http://localhost:2026`.

**Nginx routing**:
- `/api/langgraph/*` → LangGraph Server (2024)
- `/api/*` (other) → Gateway API (8001)
- `/` (non-API) → Frontend (3000)

### Running Services Separately
From the **backend** directory:
```bash
# Terminal 1: LangGraph server
make dev

# Terminal 2: Gateway API
make gateway
```

Direct access (without nginx):
- LangGraph: `http://localhost:2024`
- Gateway: `http://localhost:8001`

### Docker Development (Recommended)
```bash
make docker-init   # Build images and install dependencies (first time)
make docker-start  # Start all services with hot-reload
```
Access at `http://localhost:2026`. Services automatically restart on code changes.

## Key Features

### File Upload
- Endpoint: `POST /api/threads/{thread_id}/uploads`
- Supports: PDF, PPT, Excel, Word documents (auto-converted via `markitdown`)
- Files stored in thread-isolated directories
- Agent receives the uploaded file list via `UploadsMiddleware`; a client-side upload sketch follows
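
A hedged client-side sketch (the multipart field name `files` matches the router signature; the URL goes through nginx on 2026):

```python
# Upload a document to a thread through the nginx entry point.
import httpx

thread_id = "demo-thread"
with open("report.pdf", "rb") as f:
    resp = httpx.post(
        f"http://localhost:2026/api/threads/{thread_id}/uploads",
        files=[("files", ("report.pdf", f, "application/pdf"))],
    )
print(resp.json())  # per-file metadata: path, virtual_path, artifact_url, ...
```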

### Plan Mode
- Controlled via runtime config: `config.configurable.is_plan_mode = True`
- Provides `write_todos` tool for task tracking
- One task in_progress at a time, with real-time updates
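
The runtime-config shape, as read by `make_lead_agent` (other keys shown for context; defaults assumed from the agent factory):

```python
# Runtime config consumed by make_lead_agent (see src/agents/lead_agent/agent.py).
config = {
    "configurable": {
        "is_plan_mode": True,        # enables the write_todos planning flow
        "model_name": None,          # None -> default model
        "thinking_enabled": False,
        "subagent_enabled": False,
        "max_concurrent_subagents": 3,
    }
}
```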

### Context Summarization
- Automatic conversation summarization when approaching token limits
- Configured in `config.yaml` under the `summarization` key
- Trigger types: tokens, messages, or fraction of max input
- Keeps recent messages while summarizing older ones

### Vision Support
- For models with `supports_vision: true`
- `ViewImageMiddleware` processes images in conversation
- `view_image_tool` added to agent's toolset
- Images automatically converted to base64 and injected into state

## Code Style

### Backend (Python)
- Uses `ruff` for linting and formatting
- Line length: 240 characters
- Python 3.12+ with type hints
- Double quotes, space indentation

### Frontend (TypeScript)
- **Imports**: Enforced ordering (builtin → external → internal → parent → sibling), alphabetized
- **Unused variables**: Prefix with `_`
- **Class names**: Use `cn()` from `@/lib/utils` for conditional Tailwind classes
- **Path alias**: `@/*` maps to `src/*`
- **Components**: `ui/` and `ai-elements/` are generated from registries - don't manually edit these

## Testing

### Backend Tests
```bash
cd backend
uv run pytest
```
**Regression tests** (run in CI for every PR):
- `tests/test_docker_sandbox_mode_detection.py` - mode detection from `config.yaml`
- `tests/test_provisioner_kubeconfig.py` - kubeconfig file/directory handling

### Frontend Tests
No test framework is currently configured.

## Documentation Update Policy

**CRITICAL: Always update README.md and CLAUDE.md after every code change**

When making code changes, you MUST update the relevant documentation:
- Update `README.md` for user-facing changes (features, setup, usage instructions)
- Update `CLAUDE.md` for development changes (architecture, commands, workflows, internal systems)
- Keep documentation synchronized with the codebase at all times
- Ensure accuracy and timeliness of all documentation

## Project Structure

```
deer-flow/
├── Makefile                    # Root commands (check, install, dev, stop)
├── config.yaml                 # Main application configuration
├── extensions_config.json      # MCP servers and skills configuration
├── backend/                    # Backend application
│   ├── Makefile                # Backend-only commands (dev, gateway, lint)
│   ├── langgraph.json          # LangGraph server configuration
│   ├── src/
│   │   ├── agents/             # LangGraph agent system
│   │   │   ├── lead_agent/     # Main agent (factory + system prompt)
│   │   │   ├── middlewares/    # 11 middleware components
│   │   │   ├── memory/         # Memory extraction, queue, prompts
│   │   │   └── thread_state.py # ThreadState schema
│   │   ├── gateway/            # FastAPI Gateway API
│   │   │   ├── app.py          # FastAPI application
│   │   │   └── routers/        # 6 route modules
│   │   ├── sandbox/            # Sandbox execution system
│   │   │   ├── local/          # Local filesystem provider
│   │   │   ├── sandbox.py      # Abstract Sandbox interface
│   │   │   ├── tools.py        # bash, ls, read/write/str_replace
│   │   │   └── middleware.py   # Sandbox lifecycle management
│   │   ├── subagents/          # Subagent delegation system
│   │   │   ├── builtins/       # general-purpose, bash agents
│   │   │   ├── executor.py     # Background execution engine
│   │   │   └── registry.py     # Agent registry
│   │   ├── tools/builtins/     # Built-in tools (present_files, ask_clarification, view_image)
│   │   ├── mcp/                # MCP integration (tools, cache, client)
│   │   ├── models/             # Model factory with thinking/vision support
│   │   ├── skills/             # Skills discovery, loading, parsing
│   │   ├── config/             # Configuration system (app, model, sandbox, tool, etc.)
│   │   ├── community/          # Community tools (tavily, jina_ai, firecrawl, image_search, aio_sandbox)
│   │   ├── reflection/         # Dynamic module loading (resolve_variable, resolve_class)
│   │   └── utils/              # Utilities (network, readability)
│   ├── tests/                  # Test suite
│   └── docs/                   # Documentation
├── frontend/                   # Next.js frontend application
│   ├── src/
│   │   ├── app/                # Next.js App Router
│   │   ├── components/        # React components
│   │   ├── core/               # Business logic
│   │   ├── hooks/              # Shared React hooks
│   │   ├── lib/                # Utilities
│   │   ├── server/             # Server-side code
│   │   └── styles/             # Global CSS with Tailwind v4
│   └── package.json            # Next.js 16, React 19, TypeScript 5.8, pnpm
└── skills/                     # Agent skills directory
    ├── public/                 # Public skills (committed)
    └── custom/                 # Custom skills (gitignored)
```

@@ -41,6 +41,8 @@ Docker provides a consistent, isolated environment with all dependencies pre-con
 ```bash
 make docker-start
 ```

+`make docker-start` reads `config.yaml` and starts `provisioner` only for provisioner/Kubernetes sandbox mode.
+
 All services will start with hot-reload enabled:
 - Frontend changes are automatically reloaded
 - Backend changes trigger automatic restart

@@ -56,7 +58,7 @@ Docker provides a consistent, isolated environment with all dependencies pre-con
 ```bash
 # Build the custom k3s image (with pre-cached sandbox image)
 make docker-init
-# Start all services in Docker (localhost:2026)
+# Start Docker services (mode-aware, localhost:2026)
 make docker-start
 # Stop Docker development services
 make docker-stop

@@ -77,7 +79,8 @@ Docker Compose (deer-flow-dev)
 ├→ nginx (port 2026)      ← Reverse proxy
 ├→ web (port 3000)        ← Frontend with hot-reload
 ├→ api (port 8001)        ← Gateway API with hot-reload
-└→ langgraph (port 2024)  ← LangGraph server with hot-reload
+├→ langgraph (port 2024)  ← LangGraph server with hot-reload
+└→ provisioner (optional, port 8002) ← Started only in provisioner/K8s sandbox mode
 ```

 **Benefits of Docker Development**:

@@ -238,6 +241,13 @@ cd frontend
 pnpm test
 ```

+### PR Regression Checks
+
+Every pull request runs the backend regression workflow at [.github/workflows/backend-unit-tests.yml](.github/workflows/backend-unit-tests.yml), including:
+
+- `tests/test_provisioner_kubeconfig.py`
+- `tests/test_docker_sandbox_mode_detection.py`
+
 ## Code Style

 - **Backend (Python)**: We use `ruff` for linting and formatting

Makefile (+2 -2)

@@ -13,7 +13,7 @@ help:
 @echo ""
 @echo "Docker Development Commands:"
 @echo " make docker-init - Build the custom k3s image (with pre-cached sandbox image)"
-@echo " make docker-start - Start all services in Docker (localhost:2026)"
+@echo " make docker-start - Start Docker services (mode-aware from config.yaml, localhost:2026)"
 @echo " make docker-stop - Stop Docker development services"
 @echo " make docker-logs - View Docker development logs"
 @echo " make docker-logs-frontend - View Docker frontend logs"

@@ -105,9 +105,11 @@ The fastest way to get started with a consistent environment:
 1. **Initialize and start**:
    ```bash
    make docker-init   # Pull sandbox image (Only once or when image updates)
-   make docker-start  # Start all services and watch for code changes
+   make docker-start  # Start services (auto-detects sandbox mode from config.yaml)
    ```
+
+   `make docker-start` now starts `provisioner` only when `config.yaml` uses provisioner mode (`sandbox.use: src.community.aio_sandbox:AioSandboxProvider` with `provisioner_url`).

 2. **Access**: http://localhost:2026

 See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed Docker development guide.

@@ -142,6 +144,8 @@ DeerFlow supports multiple sandbox execution modes:
 - **Docker Execution** (runs sandbox code in isolated Docker containers)
 - **Docker Execution with Kubernetes** (runs sandbox code in Kubernetes pods via provisioner service)

+For Docker development, service startup follows `config.yaml` sandbox mode. In Local/Docker modes, `provisioner` is not started.
+
 See the [Sandbox Configuration Guide](backend/docs/CONFIGURATION.md#sandbox) to configure your preferred mode.

 #### MCP Server

@@ -242,6 +246,8 @@ DeerFlow is model-agnostic — it works with any LLM that implements the OpenAI-

 We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, workflow, and guidelines.

+Regression coverage includes Docker sandbox mode detection and provisioner kubeconfig-path handling tests in `backend/tests/`.
+
 ## License

 This project is open source and available under the [MIT License](./LICENSE).

@@ -11,6 +11,7 @@ DeerFlow is a LangGraph-based AI super agent system with a full-stack architectu
 - **Gateway API** (port 8001): REST API for models, MCP, skills, memory, artifacts, and uploads
 - **Frontend** (port 3000): Next.js web interface
 - **Nginx** (port 2026): Unified reverse proxy entry point
+- **Provisioner** (port 8002, optional in Docker dev): Started only when sandbox is configured for provisioner/Kubernetes mode

 **Project Structure**:
 ```

@@ -83,8 +84,15 @@ make dev # Run LangGraph server only (port 2024)
 make gateway   # Run Gateway API only (port 8001)
 make lint      # Lint with ruff
 make format    # Format code with ruff
 uv run pytest  # Run backend tests
 ```

+Regression tests related to Docker/provisioner behavior:
+- `tests/test_docker_sandbox_mode_detection.py` (mode detection from `config.yaml`)
+- `tests/test_provisioner_kubeconfig.py` (kubeconfig file/directory handling)
+
+CI runs these regression tests for every pull request via [.github/workflows/backend-unit-tests.yml](../.github/workflows/backend-unit-tests.yml).
+
 ## Architecture

 ### Agent System

@@ -5,10 +5,22 @@ FROM python:3.12-slim
 RUN apt-get update && apt-get install -y \
     curl \
     build-essential \
+    docker.io \
     && rm -rf /var/lib/apt/lists/*

 # Install uv
-RUN curl -LsSf https://astral.sh/uv/install.sh | sh
+# Use IP address for proxy (cai.local may not resolve in Docker container)
+ENV http_proxy=http://192.168.1.250:7897 https_proxy=http://192.168.1.250:7897
+# Exclude localhost and container network from proxy
+ENV no_proxy=localhost,127.0.0.1,0.0.0.0,frontend,gateway,langgraph,nginx,.local
+# RUN curl -LsSf https://astral.sh/uv/install.sh | sh
+# Try to install uv via official installer, fallback to pip if fails
+RUN (curl -LsSf https://astral.sh/uv/install.sh | sh || pip install uv) && \
+    echo "uv installed at:" && \
+    which uv || echo "uv not found in PATH" && \
+    ls -la /root/.local/bin/ || echo "/root/.local/bin/ not found" && \
+    echo "uv version:" && \
+    uv --version || echo "uv command not working"
 ENV PATH="/root/.local/bin:$PATH"

 # Set working directory

@@ -21,6 +33,9 @@ COPY backend ./backend
 RUN --mount=type=cache,target=/root/.cache/uv \
     sh -c "cd backend && uv sync"

+# Keep proxy for uv sync (Python package downloads may need proxy in China)
+ENV http_proxy= https_proxy=
+
 # Expose ports (gateway: 8001, langgraph: 2024)
 EXPOSE 8001 2024

@@ -98,6 +98,8 @@ sandbox:
   provisioner_url: http://provisioner:8002
 ```

+When using Docker development (`make docker-start`), DeerFlow starts the `provisioner` service only if this provisioner mode is configured. In local or plain Docker sandbox modes, `provisioner` is skipped.
+
 See [Provisioner Setup Guide](docker/provisioner/README.md) for detailed configuration, prerequisites, and troubleshooting.

 Choose between local execution or Docker-based isolation:

@@ -245,6 +245,17 @@ def make_lead_agent(config: RunnableConfig):
     subagent_enabled = config.get("configurable", {}).get("subagent_enabled", False)
     max_concurrent_subagents = config.get("configurable", {}).get("max_concurrent_subagents", 3)
     print(f"thinking_enabled: {thinking_enabled}, model_name: {model_name}, is_plan_mode: {is_plan_mode}, subagent_enabled: {subagent_enabled}, max_concurrent_subagents: {max_concurrent_subagents}")

+    # Inject run metadata for LangSmith trace tagging
+    if "metadata" not in config:
+        config["metadata"] = {}
+    config["metadata"].update({
+        "model_name": model_name or "default",
+        "thinking_enabled": thinking_enabled,
+        "is_plan_mode": is_plan_mode,
+        "subagent_enabled": subagent_enabled,
+    })
+
     return create_agent(
         model=create_chat_model(name=model_name, thinking_enabled=thinking_enabled),
         tools=get_available_tools(model_name=model_name, subagent_enabled=subagent_enabled),

@@ -315,7 +315,7 @@ def get_skills_prompt_section() -> str:
     Returns the <skill_system>...</skill_system> block listing all enabled skills,
     suitable for injection into any agent's system prompt.
     """
-    skills = load_skills(enabled_only=True)
+    skills = load_skills(enabled_only=False)  # Load all skills, we'll indicate enabled status in the prompt

     try:
         from src.config import get_app_config

@@ -16,6 +16,7 @@ import atexit
 import hashlib
 import logging
 import os
+import re
 import signal
 import threading
 import time

@@ -197,28 +198,53 @@ class AioSandboxProvider(SandboxProvider):

         return mounts

-    @staticmethod
-    def _get_thread_mounts(thread_id: str) -> list[tuple[str, str, bool]]:
-        """Get volume mounts for a thread's data directories.
-
-        Creates directories if they don't exist (lazy initialization).
-        """
-        base_dir = os.getcwd()
-        thread_dir = Path(base_dir) / THREAD_DATA_BASE_DIR / thread_id / "user-data"
+    @classmethod
+    def _ensure_thread_mount_dirs(cls, thread_id: str) -> list[tuple[str, str, bool]]:
+        """Ensure thread data mount directories exist and are writable."""
+        base_dir = Path(os.getcwd())
+        thread_dir = base_dir / THREAD_DATA_BASE_DIR / thread_id / "user-data"
+        host_thread_dir = cls._resolve_host_bind_path(thread_dir)
+
+        if str(host_thread_dir) != str(thread_dir):
+            logger.info(
+                "Resolved thread mount source from %s to host path %s",
+                thread_dir,
+                host_thread_dir,
+            )
+
+        # Ensure the root user-data directory exists and is writable for
+        # sandbox runtimes that run as non-root users.
+        os.makedirs(host_thread_dir, exist_ok=True)
+        try:
+            os.chmod(host_thread_dir, 0o777)
+        except OSError as e:
+            logger.warning(f"Could not chmod thread user-data dir {host_thread_dir}: {e}")
+
         mounts = [
-            (str(thread_dir / "workspace"), f"{VIRTUAL_PATH_PREFIX}/workspace", False),
-            (str(thread_dir / "uploads"), f"{VIRTUAL_PATH_PREFIX}/uploads", False),
-            (str(thread_dir / "outputs"), f"{VIRTUAL_PATH_PREFIX}/outputs", False),
+            (str(host_thread_dir / "workspace"), f"{VIRTUAL_PATH_PREFIX}/workspace", False),
+            (str(host_thread_dir / "uploads"), f"{VIRTUAL_PATH_PREFIX}/uploads", False),
+            (str(host_thread_dir / "outputs"), f"{VIRTUAL_PATH_PREFIX}/outputs", False),
         ]

         for host_path, _, _ in mounts:
             os.makedirs(host_path, exist_ok=True)
+            try:
+                os.chmod(host_path, 0o777)
+            except OSError as e:
+                logger.warning(f"Could not chmod thread mount dir {host_path}: {e}")

         return mounts

-    @staticmethod
-    def _get_skills_mount() -> tuple[str, str, bool] | None:
+    @classmethod
+    def _get_thread_mounts(cls, thread_id: str) -> list[tuple[str, str, bool]]:
+        """Get volume mounts for a thread's data directories.
+
+        Creates directories if they don't exist (lazy initialization).
+        """
+        return cls._ensure_thread_mount_dirs(thread_id)
+
+    @classmethod
+    def _get_skills_mount(cls) -> tuple[str, str, bool] | None:
         """Get the skills directory mount configuration."""
         try:
             config = get_app_config()

@@ -226,11 +252,73 @@ class AioSandboxProvider(SandboxProvider):
             container_path = config.skills.container_path

             if skills_path.exists():
-                return (str(skills_path), container_path, True)  # Read-only for security
+                host_skills_path = cls._resolve_host_bind_path(skills_path)
+                if str(host_skills_path) != str(skills_path):
+                    logger.info(
+                        "Resolved skills bind source from %s to host path %s",
+                        skills_path,
+                        host_skills_path,
+                    )
+                return (str(host_skills_path), container_path, True)  # Read-only for security
         except Exception as e:
             logger.warning(f"Could not setup skills mount: {e}")
         return None

+    @staticmethod
+    def _decode_mountinfo_path(path: str) -> str:
+        """Decode escaped mountinfo paths (e.g. ``\040`` -> space)."""
+
+        return re.sub(r"\\([0-7]{3})", lambda m: chr(int(m.group(1), 8)), path)
+
+    @classmethod
+    def _resolve_host_bind_path(cls, path: Path) -> Path:
+        """Resolve a container-visible bind path to its host source path.
+
+        This is needed when running gateway/langgraph inside Docker while using
+        the host Docker socket to start sandbox containers. In that scenario,
+        bind sources passed to Docker must be host paths, not paths inside the
+        current container.
+
+        If resolution fails, returns the original path.
+        """
+
+        try:
+            target = str(path.resolve())
+        except Exception:
+            target = str(path)
+
+        try:
+            with open("/proc/self/mountinfo") as f:
+                lines = f.readlines()
+        except Exception:
+            return path
+
+        best_mount_point: str | None = None
+        best_root: str | None = None
+
+        for line in lines:
+            pre, _, _ = line.partition(" - ")
+            fields = pre.split()
+            if len(fields) < 5:
+                continue
+
+            # Fields: ... root mount_point ...
+            root = cls._decode_mountinfo_path(fields[3])
+            mount_point = cls._decode_mountinfo_path(fields[4])
+
+            if target == mount_point or target.startswith(f"{mount_point.rstrip('/')}/"):
+                if best_mount_point is None or len(mount_point) > len(best_mount_point):
+                    best_mount_point = mount_point
+                    best_root = root
+
+        if best_mount_point is None or best_root is None:
+            return path
+
+        rel = target[len(best_mount_point) :].lstrip("/")
+        if rel:
+            return Path(best_root) / rel
+        return Path(best_root)
+
     # ── Idle timeout management ──────────────────────────────────────────

     def _start_idle_checker(self) -> None:

@@ -331,6 +419,11 @@ class AioSandboxProvider(SandboxProvider):
         Layer 2: Cross-process state store + file lock (covers multi-process)
         Layer 3: Backend discovery (covers containers started by other processes)
         """
+        if thread_id:
+            # Best-effort self-heal for existing threads/sandboxes: make sure
+            # mounted directories are writable by non-root users inside sandbox.
+            self._ensure_thread_mount_dirs(thread_id)
+
         # ── Layer 1: In-process cache (fast path) ──
         if thread_id:
             with self._lock:

@@ -8,6 +8,7 @@ from __future__ import annotations

 import logging
+import subprocess
 import time

 from src.utils.network import get_free_port, release_port

@@ -107,17 +108,55 @@ class LocalContainerBackend(SandboxBackend):
         port = get_free_port(start_port=self._base_port)
         try:
             container_id = self._start_container(container_name, port, extra_mounts)
+            self._ensure_user_data_permissions(container_name)
         except Exception:
             release_port(port)
             raise

+        # Use host.docker.internal when running inside Docker container
+        # This allows containers to access the sandbox on the host
+        sandbox_host = "host.docker.internal"
         return SandboxInfo(
             sandbox_id=sandbox_id,
-            sandbox_url=f"http://localhost:{port}",
+            sandbox_url=f"http://{sandbox_host}:{port}",
             container_name=container_name,
             container_id=container_id,
         )

+    def _ensure_user_data_permissions(self, container_name: str) -> None:
+        """Ensure /mnt/user-data subdirectories are writable in sandbox container.
+
+        Some sandbox services run as non-root users (e.g. ``gem``). If mounted
+        host directories are created as ``755 root:root``, uploads may fail with
+        permission denied. This best-effort fix normalizes permissions.
+        """
+
+        fix_cmd = (
+            "mkdir -p /mnt/user-data/uploads /mnt/user-data/workspace /mnt/user-data/outputs "
+            "&& chmod 777 /mnt/user-data/uploads /mnt/user-data/workspace /mnt/user-data/outputs"
+        )
+
+        # Retry briefly because the init process may still be setting up paths
+        # right after container startup.
+        for _ in range(5):
+            try:
+                subprocess.run(
+                    [self._runtime, "exec", container_name, "sh", "-lc", fix_cmd],
+                    capture_output=True,
+                    text=True,
+                    check=True,
+                    timeout=5,
+                )
+                return
+            except (subprocess.CalledProcessError, subprocess.TimeoutExpired) as e:
+                logger.debug(f"Retrying user-data permission fix for {container_name}: {e}")
+                time.sleep(0.3)
+
+        logger.warning(
+            "Failed to ensure user-data permissions for %s; uploads may fail until permissions are fixed",
+            container_name,
+        )
+
     def destroy(self, info: SandboxInfo) -> None:
         """Stop the container and release its port."""
         if info.container_id:

@@ -159,7 +198,9 @@ class LocalContainerBackend(SandboxBackend):
         if port is None:
             return None

-        sandbox_url = f"http://localhost:{port}"
+        # Use host.docker.internal when running inside Docker container
+        sandbox_host = "host.docker.internal"
+        sandbox_url = f"http://{sandbox_host}:{port}"
         if not wait_for_sandbox_ready(sandbox_url, timeout=5):
             return None

@@ -2,6 +2,7 @@ from .app_config import get_app_config
 from .extensions_config import ExtensionsConfig, get_extensions_config
 from .memory_config import MemoryConfig, get_memory_config
 from .skills_config import SkillsConfig
+from .tracing_config import get_tracing_config, is_tracing_enabled

 __all__ = [
     "get_app_config",

@@ -10,4 +11,6 @@ __all__ = [
     "get_extensions_config",
     "MemoryConfig",
     "get_memory_config",
+    "get_tracing_config",
+    "is_tracing_enabled",
 ]

@@ -161,8 +161,8 @@ class ExtensionsConfig(BaseModel):
         """
         skill_config = self.skills.get(skill_name)
         if skill_config is None:
-            # Default to enable for public & custom skill
-            return skill_category in ("public", "custom")
+            # Default to enable for public/custom/uploads skills
+            return skill_category in ("public", "custom", "uploads")
         return skill_config.enabled

@@ -0,0 +1,51 @@
import logging
import os
from pydantic import BaseModel, Field
import threading

logger = logging.getLogger(__name__)
_config_lock = threading.Lock()


class TracingConfig(BaseModel):
    """Configuration for LangSmith tracing."""

    enabled: bool = Field(...)
    api_key: str | None = Field(...)
    project: str = Field(...)
    endpoint: str = Field(...)

    @property
    def is_configured(self) -> bool:
        """Check if tracing is fully configured (enabled and has API key)."""
        return self.enabled and bool(self.api_key)


_tracing_config: TracingConfig | None = None


def get_tracing_config() -> TracingConfig:
    """Get the current tracing configuration from environment variables.

    Returns:
        TracingConfig with current settings.
    """
    global _tracing_config
    if _tracing_config is not None:
        return _tracing_config
    with _config_lock:
        if _tracing_config is not None:  # Double-check after acquiring lock
            return _tracing_config
        _tracing_config = TracingConfig(
            enabled=os.environ.get("LANGSMITH_TRACING", "").lower() == "true",
            api_key=os.environ.get("LANGSMITH_API_KEY"),
            project=os.environ.get("LANGSMITH_PROJECT", "deer-flow"),
            endpoint=os.environ.get("LANGSMITH_ENDPOINT", "https://api.smith.langchain.com"),
        )
    return _tracing_config


def is_tracing_enabled() -> bool:
    """Check if LangSmith tracing is enabled and configured.

    Returns:
        True if tracing is enabled and has an API key.
    """
    return get_tracing_config().is_configured
|
@ -9,6 +9,10 @@ class GatewayConfig(BaseModel):
|
|||
host: str = Field(default="0.0.0.0", description="Host to bind the gateway server")
|
||||
port: int = Field(default=8001, description="Port to bind the gateway server")
|
||||
cors_origins: list[str] = Field(default_factory=lambda: ["http://localhost:3000"], description="Allowed CORS origins")
|
||||
skill_content_api_url: str = Field(
|
||||
default="https://skills.xueai.art/api/cmsContent/getContent",
|
||||
description="Remote API URL used to fetch skill YAML content by contentId",
|
||||
)
|
||||
|
||||
|
||||
_gateway_config: GatewayConfig | None = None
|
||||
|
|
@ -23,5 +27,9 @@ def get_gateway_config() -> GatewayConfig:
|
|||
host=os.getenv("GATEWAY_HOST", "0.0.0.0"),
|
||||
port=int(os.getenv("GATEWAY_PORT", "8001")),
|
||||
cors_origins=cors_origins_str.split(","),
|
||||
skill_content_api_url=os.getenv(
|
||||
"SKILL_CONTENT_API_URL",
|
||||
"https://skills.xueai.art/api/cmsContent/getContent",
|
||||
),
|
||||
)
|
||||
return _gateway_config
|
||||
|
|
|
|||
|
|

@@ -6,12 +6,16 @@ import tempfile
 import zipfile
 from pathlib import Path

+import httpx
 import yaml
 from fastapi import APIRouter, HTTPException
 from pydantic import BaseModel, Field

 from src.config.extensions_config import ExtensionsConfig, SkillStateConfig, get_extensions_config, reload_extensions_config
+from src.gateway.config import get_gateway_config
 from src.gateway.path_utils import resolve_thread_virtual_path
+from src.gateway.skill_yaml_importer import materialize_skill_tree, parse_skill_yaml_spec
+from src.sandbox.sandbox_provider import get_sandbox_provider
 from src.skills import Skill, load_skills
 from src.skills.loader import get_skills_root_path

@@ -56,6 +60,58 @@ class SkillInstallResponse(BaseModel):
     message: str = Field(..., description="Installation result message")


+class SkillYamlMaterializeRequest(BaseModel):
+    """Request model for creating a skill directory tree from YAML."""
+
+    thread_id: str = Field(..., description="Thread ID where target virtual path is resolved")
+    path: str = Field(..., description="Virtual path to YAML file, e.g. /mnt/user-data/uploads/skill-package.yaml")
+    target_dir: str = Field(
+        default="/mnt/user-data/uploads/skill",
+        description="Virtual target directory where files/directories will be created",
+    )
+    clear_target: bool = Field(
+        default=True,
+        description="Whether to clear target directory before creating parsed structure",
+    )
+
+
+class SkillYamlMaterializeResponse(BaseModel):
+    """Response model for YAML skill materialization."""
+
+    success: bool = Field(..., description="Whether the operation succeeded")
+    target_dir: str = Field(..., description="Virtual target directory")
+    created_directories: int = Field(..., description="Number of created directories")
+    created_files: int = Field(..., description="Number of created files")
+    message: str = Field(..., description="Operation result message")
+
+
+class RemoteSkillBootstrapRequest(BaseModel):
+    """Request model for bootstrapping skill files from remote content API."""
+
+    thread_id: str = Field(..., description="Thread ID used for sandbox and user-data path binding")
+    content_id: int = Field(..., description="Remote content ID (maps from frontend query param skill_id)")
+    language_type: int = Field(default=0, description="Language type for remote API request body")
+    target_dir: str = Field(
+        default="/mnt/user-data/uploads/skill",
+        description="Virtual target directory where parsed files/directories are created",
+    )
+    clear_target: bool = Field(
+        default=True,
+        description="Whether to clear target directory before writing parsed files",
+    )
+
+
+class RemoteSkillBootstrapResponse(BaseModel):
+    """Response model for remote bootstrap endpoint."""
+
+    success: bool = Field(..., description="Whether bootstrap succeeded")
+    target_dir: str = Field(..., description="Virtual target directory")
+    created_directories: int = Field(..., description="Number of created directories")
+    created_files: int = Field(..., description="Number of created files")
+    sandbox_id: str = Field(..., description="Acquired sandbox ID")
+    message: str = Field(..., description="Operation result message")
+
+
 # Allowed properties in SKILL.md frontmatter
 ALLOWED_FRONTMATTER_PROPERTIES = {"name", "description", "license", "allowed-tools", "metadata"}

@@ -440,3 +496,144 @@ async def install_skill(request: SkillInstallRequest) -> SkillInstallResponse:
     except Exception as e:
         logger.error(f"Failed to install skill: {e}", exc_info=True)
         raise HTTPException(status_code=500, detail=f"Failed to install skill: {str(e)}")
+
+
+@router.post(
+    "/skills/materialize-yaml",
+    response_model=SkillYamlMaterializeResponse,
+    summary="Materialize Skill YAML",
+    description=(
+        "Parse a YAML file that describes files/directories and create the described "
+        "structure under a target virtual directory (default: /mnt/user-data/uploads/skill)."
+    ),
+)
+async def materialize_skill_yaml(request: SkillYamlMaterializeRequest) -> SkillYamlMaterializeResponse:
+    """Create skill package files from a YAML specification.
+
+    Supported YAML formats include:
+    - entries (tree objects)
+    - files + directories
+    - tree/structure nested maps
+    """
+    try:
+        yaml_path = resolve_thread_virtual_path(request.thread_id, request.path)
+        if not yaml_path.exists() or not yaml_path.is_file():
+            raise HTTPException(status_code=404, detail=f"YAML file not found: {request.path}")
+
+        target_path = resolve_thread_virtual_path(request.thread_id, request.target_dir)
+        yaml_text = yaml_path.read_text(encoding="utf-8")
+
+        parsed = parse_skill_yaml_spec(yaml_text)
+        materialize_skill_tree(parsed, target_path, clear_target=request.clear_target)
+
+        logger.info(
+            "Materialized skill YAML for thread %s: source=%s target=%s dirs=%d files=%d",
+            request.thread_id,
+            request.path,
+            request.target_dir,
+            len(parsed.directories),
+            len(parsed.files),
+        )
+
+        return SkillYamlMaterializeResponse(
+            success=True,
+            target_dir=request.target_dir,
+            created_directories=len(parsed.directories),
+            created_files=len(parsed.files),
+            message=(
+                f"Created {len(parsed.files)} files and {len(parsed.directories)} directories "
+                f"under '{request.target_dir}'"
+            ),
+        )
+    except HTTPException:
+        raise
+    except ValueError as e:
+        raise HTTPException(status_code=400, detail=str(e))
+    except Exception as e:
+        logger.error(f"Failed to materialize skill YAML: {e}", exc_info=True)
+        raise HTTPException(status_code=500, detail=f"Failed to materialize skill YAML: {str(e)}")
+
+
+@router.post(
+    "/skills/bootstrap-remote",
+    response_model=RemoteSkillBootstrapResponse,
+    summary="Bootstrap Skill Files From Remote API",
+    description=(
+        "Fetch YAML text from configured remote API by content_id/language_type, "
+        "acquire sandbox for the thread, and materialize files into "
+        "/mnt/user-data/uploads/skill before first thread submit."
+    ),
+)
+async def bootstrap_skill_from_remote(request: RemoteSkillBootstrapRequest) -> RemoteSkillBootstrapResponse:
+    """Initialize thread skill directory from remote YAML content service."""
+    try:
+        # 1) Ensure sandbox and thread personal dirs are initialized first.
+        sandbox_provider = get_sandbox_provider()
+        sandbox_id = sandbox_provider.acquire(request.thread_id)
+
+        # 2) Fetch YAML content from configured remote endpoint.
+        cfg = get_gateway_config()
+        api_url = cfg.skill_content_api_url
+        payload = {
+            "contentId": request.content_id,
+            "languageType": request.language_type,
+        }
+
+        async with httpx.AsyncClient(timeout=20.0) as client:
+            response = await client.post(api_url, json=payload)
+
+        if response.status_code >= 400:
+            raise HTTPException(
+                status_code=502,
+                detail=f"Remote skill content API failed with HTTP {response.status_code}",
+            )
+
+        try:
+            response_json = response.json()
+        except ValueError as e:
+            raise HTTPException(status_code=502, detail=f"Remote API did not return valid JSON: {e}") from e
+
+        status = response_json.get("status")
+        if status != 1000:
+            raise HTTPException(
+                status_code=502,
+                detail=f"Remote API returned non-success status: {status}, message: {response_json.get('message')}",
+            )
+
+        yaml_text = response_json.get("data")
+        if not isinstance(yaml_text, str) or not yaml_text.strip():
+            raise HTTPException(status_code=502, detail="Remote API returned empty or invalid YAML content")
+
+        # 3) Parse and write into thread uploads/skill.
+        target_path = resolve_thread_virtual_path(request.thread_id, request.target_dir)
+        parsed = parse_skill_yaml_spec(yaml_text)
+        materialize_skill_tree(parsed, target_path, clear_target=request.clear_target)
+
+        logger.info(
+            "Bootstrapped remote skill YAML for thread %s (content_id=%s, language_type=%s) to %s: dirs=%d files=%d",
+            request.thread_id,
+            request.content_id,
+            request.language_type,
+            request.target_dir,
+            len(parsed.directories),
+            len(parsed.files),
+        )
+
+        return RemoteSkillBootstrapResponse(
+            success=True,
+            target_dir=request.target_dir,
+            created_directories=len(parsed.directories),
+            created_files=len(parsed.files),
+            sandbox_id=sandbox_id,
+            message=(
+                f"Bootstrapped {len(parsed.files)} files and {len(parsed.directories)} directories "
+                f"under '{request.target_dir}'"
+            ),
+        )
+    except HTTPException:
+        raise
+    except ValueError as e:
+        raise HTTPException(status_code=400, detail=str(e))
+    except Exception as e:
+        logger.error(f"Failed to bootstrap skill from remote API: {e}", exc_info=True)
+        raise HTTPException(status_code=500, detail=f"Failed to bootstrap skill from remote API: {str(e)}")

@@ -2,9 +2,10 @@

 import logging
 import os
+import shutil
 from pathlib import Path

-from fastapi import APIRouter, File, HTTPException, UploadFile
+from fastapi import APIRouter, File, Form, HTTPException, UploadFile
 from pydantic import BaseModel

 from src.agents.middlewares.thread_data_middleware import THREAD_DATA_BASE_DIR

@@ -14,6 +15,8 @@ logger = logging.getLogger(__name__)

 router = APIRouter(prefix="/api/threads/{thread_id}/uploads", tags=["uploads"])

+SKILL_UPLOAD_TARGET = "skill"
+
 # File extensions that should be converted to markdown
 CONVERTIBLE_EXTENSIONS = {
     ".pdf",

@@ -78,6 +81,7 @@ async def convert_file_to_markdown(file_path: Path) -> Path | None:
 async def upload_files(
     thread_id: str,
     files: list[UploadFile] = File(...),
+    upload_target: str | None = Form(default=None),
 ) -> UploadResponse:
     """Upload multiple files to a thread's uploads directory.

@@ -95,6 +99,19 @@ async def upload_files(
         raise HTTPException(status_code=400, detail="No files provided")

     uploads_dir = get_uploads_dir(thread_id)
+    target_dir = uploads_dir
+    relative_upload_prefix = "uploads"
+
+    if upload_target == SKILL_UPLOAD_TARGET:
+        target_dir = uploads_dir / SKILL_UPLOAD_TARGET
+
+        # Clean existing uploads/skill directory to keep only the latest skill package
+        if target_dir.exists():
+            shutil.rmtree(target_dir)
+        target_dir.mkdir(parents=True, exist_ok=True)
+
+        relative_upload_prefix = f"uploads/{SKILL_UPLOAD_TARGET}"

     uploaded_files = []

     sandbox_provider = get_sandbox_provider()

@@ -107,12 +124,12 @@ async def upload_files(

         try:
             # Save the original file
-            file_path = uploads_dir / file.filename
+            file_path = target_dir / file.filename
             content = await file.read()

             # Build relative path from backend root
-            relative_path = f".deer-flow/threads/{thread_id}/user-data/uploads/{file.filename}"
-            virtual_path = f"/mnt/user-data/uploads/{file.filename}"
+            relative_path = f".deer-flow/threads/{thread_id}/user-data/{relative_upload_prefix}/{file.filename}"
+            virtual_path = f"/mnt/user-data/{relative_upload_prefix}/{file.filename}"
             sandbox.update_file(virtual_path, content)

             file_info = {

@@ -120,7 +137,7 @@ async def upload_files(
                 "size": str(len(content)),
                 "path": relative_path,  # Actual filesystem path (relative to backend/)
                 "virtual_path": virtual_path,  # Path for Agent in sandbox
-                "artifact_url": f"/api/threads/{thread_id}/artifacts/mnt/user-data/uploads/{file.filename}",  # HTTP URL
+                "artifact_url": f"/api/threads/{thread_id}/artifacts/mnt/user-data/{relative_upload_prefix}/{file.filename}",  # HTTP URL
             }

             logger.info(f"Saved file: {file.filename} ({len(content)} bytes) to {relative_path}")

@@ -130,11 +147,13 @@ async def upload_files(
             if file_ext in CONVERTIBLE_EXTENSIONS:
                 md_path = await convert_file_to_markdown(file_path)
                 if md_path:
-                    md_relative_path = f".deer-flow/threads/{thread_id}/user-data/uploads/{md_path.name}"
+                    md_relative_path = f".deer-flow/threads/{thread_id}/user-data/{relative_upload_prefix}/{md_path.name}"
                     file_info["markdown_file"] = md_path.name
                     file_info["markdown_path"] = md_relative_path
-                    file_info["markdown_virtual_path"] = f"/mnt/user-data/uploads/{md_path.name}"
-                    file_info["markdown_artifact_url"] = f"/api/threads/{thread_id}/artifacts/mnt/user-data/uploads/{md_path.name}"
+                    file_info["markdown_virtual_path"] = f"/mnt/user-data/{relative_upload_prefix}/{md_path.name}"
+                    file_info["markdown_artifact_url"] = (
+                        f"/api/threads/{thread_id}/artifacts/mnt/user-data/{relative_upload_prefix}/{md_path.name}"
+                    )

                 uploaded_files.append(file_info)

@@ -165,17 +184,18 @@ async def list_uploaded_files(thread_id: str) -> dict:
         return {"files": [], "count": 0}

     files = []
-    for file_path in sorted(uploads_dir.iterdir()):
+    for file_path in sorted(uploads_dir.rglob("*")):
         if file_path.is_file():
             stat = file_path.stat()
-            relative_path = f".deer-flow/threads/{thread_id}/user-data/uploads/{file_path.name}"
+            path_in_uploads = file_path.relative_to(uploads_dir).as_posix()
+            relative_path = f".deer-flow/threads/{thread_id}/user-data/uploads/{path_in_uploads}"
             files.append(
                 {
-                    "filename": file_path.name,
+                    "filename": path_in_uploads,
                     "size": stat.st_size,
                     "path": relative_path,  # Actual filesystem path (relative to backend/)
-                    "virtual_path": f"/mnt/user-data/uploads/{file_path.name}",  # Path for Agent in sandbox
-                    "artifact_url": f"/api/threads/{thread_id}/artifacts/mnt/user-data/uploads/{file_path.name}",  # HTTP URL
+                    "virtual_path": f"/mnt/user-data/uploads/{path_in_uploads}",  # Path for Agent in sandbox
+                    "artifact_url": f"/api/threads/{thread_id}/artifacts/mnt/user-data/uploads/{path_in_uploads}",  # HTTP URL
                     "extension": file_path.suffix,
                     "modified": stat.st_mtime,
                 }

@@ -0,0 +1,331 @@
"""Utilities for parsing YAML-defined skill package structures.

This module supports turning a YAML document describing files/directories into
real filesystem content under a thread's virtual path (for example,
``/mnt/user-data/uploads/skill``).
"""

from __future__ import annotations

from dataclasses import dataclass
from pathlib import Path

import yaml


@dataclass(frozen=True)
class ParsedSkillTree:
    """Normalized parsed structure from YAML spec."""

    directories: set[str]
    files: dict[str, str]


def _pick_first_existing(data: dict, keys: tuple[str, ...]):
    for key in keys:
        if key in data:
            return data[key]
    return None


def _extract_spec_root(data: dict) -> dict:
    """Extract the effective spec root.

    Supports nested wrappers like:
    - skill: { ... }
    - package: { ... }
    - spec: { ... }
    """
    if not isinstance(data, dict):
        raise ValueError("YAML root must be an object")

    known_keys = {
        "entries",
        "files",
        "directories",
        "dirs",
        "tree",
        "structure",
        "file_tree",
        "fileTree",
        "file_structure",
        "paths",
    }
    if any(k in data for k in known_keys):
        return data

    wrapper_candidates = ("skill", "package", "spec", "data", "content", "payload")
    for wrapper in wrapper_candidates:
        candidate = data.get(wrapper)
        if isinstance(candidate, dict) and any(k in candidate for k in known_keys):
            return candidate

    # Fallback: if exactly one nested object exists, try it as spec root.
    nested_dicts = [v for v in data.values() if isinstance(v, dict)]
    if len(nested_dicts) == 1:
        return nested_dicts[0]

    return data


def _normalize_relative_path(path: str) -> str:
    """Normalize and validate a relative path.

    Raises:
        ValueError: If path is unsafe or invalid.
    """
    if not isinstance(path, str):
        raise ValueError("Path must be a string")

    normalized = path.strip().replace("\\", "/")
    if normalized in {"/", ".", "./"}:
        return ""
    if not normalized:
        raise ValueError("Path cannot be empty")

    if normalized.startswith("/"):
        raise ValueError(f"Path must be relative, got absolute path: {path}")

    if ":" in normalized:
        raise ValueError(f"Path cannot contain ':' (possible drive path): {path}")

    parts = [part for part in normalized.split("/") if part]
    if not parts:
        raise ValueError("Path cannot be empty")

    if any(part in {".", ".."} for part in parts):
        raise ValueError(f"Path traversal is not allowed: {path}")

    return "/".join(parts)
|
||||
|
||||
|
||||
def _add_directory(path: str, directories: set[str]) -> None:
|
||||
normalized = _normalize_relative_path(path)
|
||||
if not normalized:
|
||||
return
|
||||
directories.add(normalized)
|
||||
|
||||
|
||||
def _add_file(path: str, content: str, files: dict[str, str], directories: set[str]) -> None:
|
||||
normalized = _normalize_relative_path(path)
|
||||
if not normalized:
|
||||
raise ValueError("File path cannot be root ('/')")
|
||||
if not isinstance(content, str):
|
||||
raise ValueError(f"File content must be a string for '{normalized}'")
|
||||
|
||||
parent = Path(normalized).parent
|
||||
if str(parent) != ".":
|
||||
directories.add(str(parent).replace("\\", "/"))
|
||||
|
||||
files[normalized] = content
|
||||
|
||||
|
||||
def _walk_tree_dict(tree: dict, base: str, files: dict[str, str], directories: set[str]) -> None:
|
||||
for name, value in tree.items():
|
||||
if not isinstance(name, str):
|
||||
raise ValueError("Tree keys must be strings")
|
||||
|
||||
if name.strip() in {"/", ".", "./"}:
|
||||
if isinstance(value, dict):
|
||||
_walk_tree_dict(value, base, files, directories)
|
||||
continue
|
||||
raise ValueError("Root sentinel '/' can only be used for directory/object nodes")
|
||||
|
||||
node_path = f"{base}/{name}" if base else name
|
||||
|
||||
if isinstance(value, dict):
|
||||
_add_directory(node_path, directories)
|
||||
_walk_tree_dict(value, _normalize_relative_path(node_path), files, directories)
|
||||
elif isinstance(value, str):
|
||||
_add_file(node_path, value, files, directories)
|
||||
else:
|
||||
raise ValueError(
|
||||
f"Unsupported tree node type for '{node_path}': {type(value).__name__}. "
|
||||
"Use object (directory) or string (file content)."
|
||||
)
|
||||
|
||||
|
||||
def _parse_entries_node(
|
||||
node: dict,
|
||||
base: str,
|
||||
files: dict[str, str],
|
||||
directories: set[str],
|
||||
) -> None:
|
||||
raw_path = node.get("path")
|
||||
raw_name = node.get("name")
|
||||
|
||||
if raw_path is None and raw_name is None:
|
||||
raise ValueError("Each entry must have at least one of: 'path' or 'name'")
|
||||
|
||||
if raw_path is not None and not isinstance(raw_path, str):
|
||||
raise ValueError("Entry 'path' must be a string")
|
||||
if raw_name is not None and not isinstance(raw_name, str):
|
||||
raise ValueError("Entry 'name' must be a string")
|
||||
|
||||
# Common schema compatibility:
|
||||
# - `path` is parent directory (e.g. "/")
|
||||
# - `name` is current node name (e.g. "README.md")
|
||||
# Build parent then append name when both are present.
|
||||
parent = base
|
||||
if isinstance(raw_path, str) and raw_path.strip():
|
||||
rp = raw_path.strip()
|
||||
if rp not in {"/", ".", "./"}:
|
||||
parent = _normalize_relative_path(f"{base}/{rp}" if base else rp)
|
||||
|
||||
if isinstance(raw_name, str) and raw_name.strip():
|
||||
if parent:
|
||||
node_path = _normalize_relative_path(f"{parent}/{raw_name.strip()}")
|
||||
else:
|
||||
node_path = _normalize_relative_path(raw_name.strip())
|
||||
else:
|
||||
# Fallback: only path provided
|
||||
if not isinstance(raw_path, str) or not raw_path.strip():
|
||||
raise ValueError("Each entry must have a non-empty 'path' or 'name'")
|
||||
rp = raw_path.strip()
|
||||
if rp in {"/", ".", "./"}:
|
||||
node_path = base
|
||||
else:
|
||||
node_path = _normalize_relative_path(f"{base}/{rp}" if base else rp)
|
||||
|
||||
node_type = node.get("type")
|
||||
content = node.get("content")
|
||||
children = node.get("children")
|
||||
|
||||
inferred_type = "directory" if isinstance(children, list) else "file" if content is not None else None
|
||||
final_type = node_type or inferred_type
|
||||
|
||||
if final_type == "directory":
|
||||
_add_directory(node_path, directories)
|
||||
if children is None:
|
||||
return
|
||||
if not isinstance(children, list):
|
||||
raise ValueError(f"Entry '{node_path}' children must be a list")
|
||||
for child in children:
|
||||
if not isinstance(child, dict):
|
||||
raise ValueError(f"Entry '{node_path}' children must be objects")
|
||||
_parse_entries_node(child, node_path, files, directories)
|
||||
return
|
||||
|
||||
if final_type == "file":
|
||||
if content is None:
|
||||
raise ValueError(f"File entry '{node_path}' is missing 'content'")
|
||||
_add_file(node_path, content, files, directories)
|
||||
return
|
||||
|
||||
raise ValueError(
|
||||
f"Unable to infer entry type for '{node_path}'. Set 'type' to 'file' or 'directory'."
|
||||
)
|
||||
|
||||
|
||||
def parse_skill_yaml_spec(yaml_text: str) -> ParsedSkillTree:
|
||||
"""Parse YAML text into normalized directories and files.
|
||||
|
||||
Supported forms:
|
||||
- entries: [{type,path/content/children}, ...]
|
||||
- files: {"path/to/file": "text"} + optional directories/dirs
|
||||
- tree/structure: nested dict where dict=directory and string=file content
|
||||
"""
|
||||
try:
|
||||
data = yaml.safe_load(yaml_text)
|
||||
except yaml.YAMLError as e:
|
||||
raise ValueError(f"Invalid YAML: {e}") from e
|
||||
|
||||
if data is None:
|
||||
raise ValueError("YAML is empty")
|
||||
if not isinstance(data, dict):
|
||||
raise ValueError("YAML root must be an object")
|
||||
|
||||
data = _extract_spec_root(data)
|
||||
|
||||
directories: set[str] = set()
|
||||
files: dict[str, str] = {}
|
||||
|
||||
# Form 1: explicit entries list
|
||||
entries = _pick_first_existing(data, ("entries", "nodes", "items"))
|
||||
if entries is not None:
|
||||
if not isinstance(entries, list):
|
||||
raise ValueError("'entries' must be a list")
|
||||
for entry in entries:
|
||||
if not isinstance(entry, dict):
|
||||
raise ValueError("Each item in 'entries' must be an object")
|
||||
_parse_entries_node(entry, "", files, directories)
|
||||
|
||||
# Form 2: files + directories
|
||||
file_map = _pick_first_existing(data, ("files", "paths", "file_map", "fileMap", "documents"))
|
||||
if file_map is not None:
|
||||
if isinstance(file_map, dict):
|
||||
for path, content in file_map.items():
|
||||
_add_file(path, content, files, directories)
|
||||
elif isinstance(file_map, list):
|
||||
for item in file_map:
|
||||
if not isinstance(item, dict):
|
||||
raise ValueError("Each item in 'files' list must be an object")
|
||||
path = item.get("path") or item.get("name") or item.get("file")
|
||||
content = item.get("content")
|
||||
if content is None:
|
||||
content = item.get("text")
|
||||
if content is None:
|
||||
content = item.get("body")
|
||||
if path is None or content is None:
|
||||
raise ValueError("Each file item needs 'path' and 'content'")
|
||||
_add_file(path, content, files, directories)
|
||||
else:
|
||||
raise ValueError("'files' must be a map or list")
|
||||
|
||||
directory_list = _pick_first_existing(data, ("directories", "dirs", "folders", "folder_paths"))
|
||||
if directory_list is not None:
|
||||
if not isinstance(directory_list, list):
|
||||
raise ValueError("'directories'/'dirs' must be a list")
|
||||
for path in directory_list:
|
||||
_add_directory(path, directories)
|
||||
|
||||
# Form 3: nested tree
|
||||
tree = _pick_first_existing(data, ("tree", "structure", "file_tree", "fileTree", "file_structure"))
|
||||
if tree is not None:
|
||||
if isinstance(tree, dict):
|
||||
_walk_tree_dict(tree, "", files, directories)
|
||||
elif isinstance(tree, list):
|
||||
for item in tree:
|
||||
if not isinstance(item, dict):
|
||||
raise ValueError("Items in 'tree' list must be objects")
|
||||
_parse_entries_node(item, "", files, directories)
|
||||
else:
|
||||
raise ValueError("'tree'/'structure' must be an object or list")
|
||||
|
||||
# Heuristic fallback: treat root as path->content map when possible.
|
||||
if not files and not directories:
|
||||
candidate_keys = [k for k in data.keys() if isinstance(k, str)]
|
||||
if candidate_keys and all(isinstance(data[k], str) for k in candidate_keys):
|
||||
for path, content in data.items():
|
||||
_add_file(path, content, files, directories)
|
||||
|
||||
if not files and not directories:
|
||||
raise ValueError(
|
||||
"No content found. Provide at least one of: entries, files, directories/dirs, tree/structure"
|
||||
)
|
||||
|
||||
# Ensure parent directories exist for every file
|
||||
for rel_file in files:
|
||||
parent = Path(rel_file).parent
|
||||
if str(parent) != ".":
|
||||
directories.add(str(parent).replace("\\", "/"))
|
||||
|
||||
return ParsedSkillTree(directories=directories, files=files)
|
||||
|
||||
|
||||
def materialize_skill_tree(parsed: ParsedSkillTree, target_root: Path, clear_target: bool = True) -> None:
|
||||
"""Create parsed directories/files under target root."""
|
||||
if clear_target and target_root.exists():
|
||||
import shutil
|
||||
|
||||
shutil.rmtree(target_root)
|
||||
|
||||
target_root.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
for rel_dir in sorted(parsed.directories, key=lambda p: (p.count("/"), p)):
|
||||
(target_root / rel_dir).mkdir(parents=True, exist_ok=True)
|
||||
|
||||
for rel_file, content in parsed.files.items():
|
||||
file_path = target_root / rel_file
|
||||
file_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
file_path.write_text(content, encoding="utf-8")
|
||||
|
|
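To make the supported spec forms concrete, here is a minimal sketch of parsing and materializing a "files + dirs" spec wrapped in a `skill:` block (the import path `skill_yaml` is assumed; the diff does not show where the module lives):

```python
from pathlib import Path

# Hypothetical import path for the module above.
from skill_yaml import parse_skill_yaml_spec, materialize_skill_tree

spec = """
skill:
  files:
    SKILL.md: "# My skill"
    scripts/run.py: "print('hi')"
  dirs:
    - assets
"""

parsed = parse_skill_yaml_spec(spec)
# scripts/ is inferred from scripts/run.py; assets/ comes from dirs.
assert parsed.directories == {"scripts", "assets"}
assert set(parsed.files) == {"SKILL.md", "scripts/run.py"}

materialize_skill_tree(parsed, Path("/tmp/skill-demo"), clear_target=True)
```

The `entries` and `tree` forms funnel into the same `ParsedSkillTree`, so callers only ever deal with one normalized shape.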
@@ -1,8 +1,10 @@
+import logging
 from langchain.chat_models import BaseChatModel

-from src.config import get_app_config
+from src.config import get_app_config, get_tracing_config, is_tracing_enabled
 from src.reflection import resolve_class

+logger = logging.getLogger(__name__)
+
 def create_chat_model(name: str | None = None, thinking_enabled: bool = False, **kwargs) -> BaseChatModel:
     """Create a chat model instance from the config.

@@ -37,4 +39,20 @@ def create_chat_model(name: str | None = None, thinking_enabled: bool = False, *
             raise ValueError(f"Model {name} does not support thinking. Set `supports_thinking` to true in the `config.yaml` to enable thinking.") from None
         model_settings_from_config.update(model_config.when_thinking_enabled)
     model_instance = model_class(**kwargs, **model_settings_from_config)
+
+    if is_tracing_enabled():
+        try:
+            from langchain_core.tracers.langchain import LangChainTracer
+
+            tracing_config = get_tracing_config()
+            tracer = LangChainTracer(
+                project_name=tracing_config.project,
+            )
+            existing_callbacks = model_instance.callbacks or []
+            model_instance.callbacks = [*existing_callbacks, tracer]
+            logger.debug(
+                f"LangSmith tracing attached to model '{name}' (project='{tracing_config.project}')"
+            )
+        except Exception as e:
+            logger.warning(f"Failed to attach LangSmith tracing to model '{name}': {e}")
     return model_instance
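A hedged sketch of what the factory now does end to end (the import path and model name here are illustrative, not confirmed by the diff):

```python
# Illustrative only: assumes config.yaml defines a model named "deepseek-v3"
# and a tracing section with enabled=true and a project name.
from src.models import create_chat_model  # hypothetical import path

model = create_chat_model("deepseek-v3", thinking_enabled=True)

# When tracing is enabled, a LangChainTracer is appended to the model's
# callbacks, so the invoke() below is also recorded in LangSmith.
print(model.invoke("Say hi").content)
```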
@@ -4,6 +4,9 @@ from .parser import parse_skill_file
 from .types import Skill


+UPLOADS_SKILLS_PATH = Path("/mnt/user-data/uploads")
+
+
 def get_skills_root_path() -> Path:
     """
     Get the root path of the skills directory.

@@ -22,7 +25,9 @@ def load_skills(skills_path: Path | None = None, use_config: bool = True, enable
     """
     Load all skills from the skills directory.

-    Scans both public and custom skill directories, parsing SKILL.md files
+    Scans public/custom skill directories under the skills root, and also
+    scans the user uploads skill directory in the virtual personal folder
+    (/mnt/user-data/uploads), parsing SKILL.md files
     to extract metadata. The enabled state is determined by the skills_state_config.json file.

     Args:

@@ -53,9 +58,15 @@ def load_skills(skills_path: Path | None = None, use_config: bool = True, enable

     skills = []

-    # Scan public and custom directories
-    for category in ["public", "custom"]:
-        category_path = skills_path / category
+    # Scan public/custom directories under skills root, and uploads skills
+    # under the virtual personal folder.
+    scan_targets: list[tuple[str, Path]] = [
+        ("public", skills_path / "public"),
+        ("custom", skills_path / "custom"),
+        ("uploads", UPLOADS_SKILLS_PATH),
+    ]
+
+    for category, category_path in scan_targets:
         if not category_path.exists() or not category_path.is_dir():
             continue
@@ -10,7 +10,7 @@ def parse_skill_file(skill_file: Path, category: str) -> Skill | None:

     Args:
         skill_file: Path to the SKILL.md file
-        category: Category of the skill ('public' or 'custom')
+        category: Category of the skill ('public', 'custom', or 'uploads')

     Returns:
         Skill object if parsing succeeds, None otherwise
@@ -11,7 +11,7 @@ class Skill:
     license: str | None
     skill_dir: Path
     skill_file: Path
-    category: str  # 'public' or 'custom'
+    category: str  # 'public', 'custom', or 'uploads'
     enabled: bool = False  # Whether this skill is enabled

     @property

@@ -29,6 +29,8 @@ class Skill:
         Returns:
             Full container path to the skill directory
         """
+        if self.category == "uploads":
+            return f"/mnt/user-data/uploads/{self.skill_dir.name}"
         return f"{container_base_path}/{self.category}/{self.skill_dir.name}"

     def get_container_file_path(self, container_base_path: str = "/mnt/skills") -> str:

@@ -41,6 +43,8 @@ class Skill:
         Returns:
             Full container path to the skill's SKILL.md file
         """
+        if self.category == "uploads":
+            return f"/mnt/user-data/uploads/{self.skill_dir.name}/SKILL.md"
         return f"{container_base_path}/{self.category}/{self.skill_dir.name}/SKILL.md"

     def __repr__(self) -> str:
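A quick sketch of how the new category steers path resolution (a hypothetical stand-in reduced to the fields the paths actually depend on):

```python
from pathlib import Path

# Minimal stand-in mirroring the two branches added above.
def container_path(category: str, skill_dir: Path, base: str = "/mnt/skills") -> str:
    # Uploaded skills live in the per-thread virtual folder, not under /mnt/skills.
    if category == "uploads":
        return f"/mnt/user-data/uploads/{skill_dir.name}"
    return f"{base}/{category}/{skill_dir.name}"

print(container_path("public", Path("skills/public/pdf")))     # /mnt/skills/public/pdf
print(container_path("uploads", Path("uploads/my-skill")))      # /mnt/user-data/uploads/my-skill
```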
@@ -0,0 +1,95 @@
"""Regression tests for docker sandbox mode detection logic."""

from __future__ import annotations

import subprocess
import tempfile
from pathlib import Path

REPO_ROOT = Path(__file__).resolve().parents[2]
SCRIPT_PATH = REPO_ROOT / "scripts" / "docker.sh"


def _detect_mode_with_config(config_content: str) -> str:
    """Write config content into a temp project root and execute detect_sandbox_mode."""
    with tempfile.TemporaryDirectory() as tmpdir:
        tmp_root = Path(tmpdir)
        (tmp_root / "config.yaml").write_text(config_content)

        command = (
            f"source '{SCRIPT_PATH}' && "
            f"PROJECT_ROOT='{tmp_root}' && "
            "detect_sandbox_mode"
        )

        output = subprocess.check_output(
            ["bash", "-lc", command],
            text=True,
        ).strip()

        return output


def test_detect_mode_defaults_to_local_when_config_missing():
    """No config file should default to local mode."""
    with tempfile.TemporaryDirectory() as tmpdir:
        command = (
            f"source '{SCRIPT_PATH}' && "
            f"PROJECT_ROOT='{tmpdir}' && "
            "detect_sandbox_mode"
        )
        output = subprocess.check_output(["bash", "-lc", command], text=True).strip()

        assert output == "local"


def test_detect_mode_local_provider():
    """Local sandbox provider should map to local mode."""
    config = """
sandbox:
  use: src.sandbox.local:LocalSandboxProvider
""".strip()

    assert _detect_mode_with_config(config) == "local"


def test_detect_mode_aio_without_provisioner_url():
    """AIO sandbox without provisioner_url should map to aio mode."""
    config = """
sandbox:
  use: src.community.aio_sandbox:AioSandboxProvider
""".strip()

    assert _detect_mode_with_config(config) == "aio"


def test_detect_mode_provisioner_with_url():
    """AIO sandbox with provisioner_url should map to provisioner mode."""
    config = """
sandbox:
  use: src.community.aio_sandbox:AioSandboxProvider
  provisioner_url: http://provisioner:8002
""".strip()

    assert _detect_mode_with_config(config) == "provisioner"


def test_detect_mode_ignores_commented_provisioner_url():
    """Commented provisioner_url should not activate provisioner mode."""
    config = """
sandbox:
  use: src.community.aio_sandbox:AioSandboxProvider
  # provisioner_url: http://provisioner:8002
""".strip()

    assert _detect_mode_with_config(config) == "aio"


def test_detect_mode_unknown_provider_falls_back_to_local():
    """Unknown sandbox provider should default to local mode."""
    config = """
sandbox:
  use: custom.module:UnknownProvider
""".strip()

    assert _detect_mode_with_config(config) == "local"
@@ -0,0 +1,121 @@
"""Regression tests for provisioner kubeconfig path handling."""

from __future__ import annotations

import importlib.util
from pathlib import Path


def _load_provisioner_module():
    """Load docker/provisioner/app.py as an importable test module."""
    repo_root = Path(__file__).resolve().parents[2]
    module_path = repo_root / "docker" / "provisioner" / "app.py"
    spec = importlib.util.spec_from_file_location("provisioner_app_test", module_path)
    assert spec is not None
    assert spec.loader is not None
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


def test_wait_for_kubeconfig_rejects_directory(tmp_path):
    """Directory mount at kubeconfig path should fail fast with clear error."""
    provisioner_module = _load_provisioner_module()
    kubeconfig_dir = tmp_path / "config_dir"
    kubeconfig_dir.mkdir()

    provisioner_module.KUBECONFIG_PATH = str(kubeconfig_dir)

    try:
        provisioner_module._wait_for_kubeconfig(timeout=1)
        raise AssertionError("Expected RuntimeError for directory kubeconfig path")
    except RuntimeError as exc:
        assert "directory" in str(exc)


def test_wait_for_kubeconfig_accepts_file(tmp_path):
    """Regular file mount should pass readiness wait."""
    provisioner_module = _load_provisioner_module()
    kubeconfig_file = tmp_path / "config"
    kubeconfig_file.write_text("apiVersion: v1\n")

    provisioner_module.KUBECONFIG_PATH = str(kubeconfig_file)

    # Should return immediately without raising.
    provisioner_module._wait_for_kubeconfig(timeout=1)


def test_init_k8s_client_rejects_directory_path(tmp_path):
    """KUBECONFIG_PATH that resolves to a directory should be rejected."""
    provisioner_module = _load_provisioner_module()
    kubeconfig_dir = tmp_path / "config_dir"
    kubeconfig_dir.mkdir()

    provisioner_module.KUBECONFIG_PATH = str(kubeconfig_dir)

    try:
        provisioner_module._init_k8s_client()
        raise AssertionError("Expected RuntimeError for directory kubeconfig path")
    except RuntimeError as exc:
        assert "expected a file" in str(exc)


def test_init_k8s_client_uses_file_kubeconfig(tmp_path, monkeypatch):
    """When file exists, provisioner should load kubeconfig file path."""
    provisioner_module = _load_provisioner_module()
    kubeconfig_file = tmp_path / "config"
    kubeconfig_file.write_text("apiVersion: v1\n")

    called: dict[str, object] = {}

    def fake_load_kube_config(config_file: str):
        called["config_file"] = config_file

    monkeypatch.setattr(
        provisioner_module.k8s_config,
        "load_kube_config",
        fake_load_kube_config,
    )
    monkeypatch.setattr(
        provisioner_module.k8s_client,
        "CoreV1Api",
        lambda *args, **kwargs: "core-v1",
    )

    provisioner_module.KUBECONFIG_PATH = str(kubeconfig_file)

    result = provisioner_module._init_k8s_client()

    assert called["config_file"] == str(kubeconfig_file)
    assert result == "core-v1"


def test_init_k8s_client_falls_back_to_incluster_when_missing(tmp_path, monkeypatch):
    """When kubeconfig file is missing, in-cluster config should be attempted."""
    provisioner_module = _load_provisioner_module()
    missing_path = tmp_path / "missing-config"

    calls: dict[str, int] = {"incluster": 0}

    def fake_load_incluster_config():
        calls["incluster"] += 1

    monkeypatch.setattr(
        provisioner_module.k8s_config,
        "load_incluster_config",
        fake_load_incluster_config,
    )
    monkeypatch.setattr(
        provisioner_module.k8s_client,
        "CoreV1Api",
        lambda *args, **kwargs: "core-v1",
    )

    provisioner_module.KUBECONFIG_PATH = str(missing_path)

    result = provisioner_module._init_k8s_client()

    assert calls["incluster"] == 1
    assert result == "core-v1"
@@ -35,8 +35,8 @@ models:
   # Example: DeepSeek model (with thinking support)
   # - name: deepseek-v3
   #   display_name: DeepSeek V3 (Thinking)
-  #   use: langchain_deepseek:ChatDeepSeek
-  #   model: deepseek-chat
+  #   use: src.models.patched_deepseek:PatchedChatDeepSeek
+  #   model: deepseek-reasoner
   #   api_key: $DEEPSEEK_API_KEY
   #   max_tokens: 16384
   #   supports_thinking: true
@@ -6,11 +6,10 @@
 # - frontend: Frontend Next.js dev server (port 3000)
 # - gateway: Backend Gateway API (port 8001)
 # - langgraph: LangGraph server (port 2024)
-# - provisioner: Sandbox provisioner (creates Pods in host Kubernetes)
+# - provisioner (optional): Sandbox provisioner (creates Pods in host Kubernetes)
 #
 # Prerequisites:
-# - Host machine must have a running Kubernetes cluster (Docker Desktop K8s,
-#   minikube, kind, etc.) with kubectl configured (~/.kube/config).
+# - Kubernetes cluster + kubeconfig are only required when using provisioner mode.
 #
 # Access: http://localhost:2026

@@ -20,6 +19,8 @@ services:
   # cluster via the K8s API.
   # Backend accesses sandboxes directly via host.docker.internal:{NodePort}.
   provisioner:
+    profiles:
+      - provisioner
     build:
       context: ./provisioner
       dockerfile: Dockerfile

@@ -55,25 +56,28 @@ services:
       start_period: 15s

   # ── Reverse Proxy ──────────────────────────────────────────────────────
-  # Routes API traffic to gateway, langgraph, and provisioner services.
+  # Routes API traffic to gateway/langgraph and (optionally) provisioner.
+  # Select nginx config via NGINX_CONF:
+  #   - nginx.local.conf (default): no provisioner route (local/aio modes)
+  #   - nginx.conf: includes provisioner route (provisioner mode)
   nginx:
     image: nginx:alpine
     container_name: deer-flow-nginx
     ports:
       - "2026:2026"
     volumes:
-      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
+      - ./nginx/${NGINX_CONF:-nginx.local.conf}:/etc/nginx/nginx.conf:ro
     depends_on:
       - frontend
       - gateway
       - langgraph
-      - provisioner
     networks:
       - deer-flow-dev
     restart: unless-stopped

   # Frontend - Next.js Development Server
   frontend:
+    # image: deer-flow-dev-frontend:latest
     build:
       context: ../
       dockerfile: frontend/Dockerfile

@@ -101,6 +105,7 @@ services:

   # Backend - Gateway API
   gateway:
+    # image: deer-flow-dev-gateway:latest
     build:
       context: ../
       dockerfile: backend/Dockerfile

@@ -117,9 +122,13 @@ services:
       - ../backend/.deer-flow:/app/backend/.deer-flow
       # Mount uv cache for faster dependency installation
      - ~/.cache/uv:/root/.cache/uv
+      # Mount Docker socket for aio sandbox
+      - /var/run/docker.sock:/var/run/docker.sock:ro
     working_dir: /app
     environment:
       - CI=true
+      # Docker environment for aio sandbox
+      - DOCKER_HOST=unix:///var/run/docker.sock
     env_file:
       - ../.env
     extra_hosts:

@@ -131,27 +140,37 @@ services:

   # Backend - LangGraph Server
   langgraph:
+    # image: deer-flow-dev-langgraph:latest
     build:
       context: ../
       dockerfile: backend/Dockerfile
       cache_from:
         - type=local,src=/tmp/docker-cache-langgraph
     container_name: deer-flow-langgraph
-    command: sh -c "cd backend && uv run langgraph dev --no-browser --allow-blocking --host 0.0.0.0 --port 2024 > /app/logs/langgraph.log 2>&1"
+    command: sh -c "cd backend && exec uv run langgraph dev --no-browser --no-reload --allow-blocking --host 0.0.0.0 --port 2024 > /app/logs/langgraph.log 2>&1"
     volumes:
       - ../backend/src:/app/backend/src
       - ../backend/.env:/app/backend/.env
+      # Persist LangGraph inmem runtime data (threads/checkpoints/store)
+      - ../backend/.langgraph_api:/app/backend/.langgraph_api
       - ../config.yaml:/app/config.yaml
       - ../skills:/app/skills
       - ../logs:/app/logs
       - ../backend/.deer-flow:/app/backend/.deer-flow
       # Mount uv cache for faster dependency installation
       - ~/.cache/uv:/root/.cache/uv
+      # Mount Docker socket for aio sandbox
+      - /var/run/docker.sock:/var/run/docker.sock:ro
     working_dir: /app
     environment:
       - CI=true
+      # Docker environment for aio sandbox
+      - DOCKER_HOST=unix:///var/run/docker.sock
     env_file:
       - ../.env
     extra_hosts:
       # For Linux: map host.docker.internal to host gateway
       - "host.docker.internal:host-gateway"
     networks:
       - deer-flow-dev
     restart: unless-stopped
@@ -14,17 +14,17 @@ http {
     access_log /dev/stdout;
     error_log /dev/stderr;

-    # Upstream servers (using localhost for local development)
+    # Upstream servers (using Docker service names for Docker Compose)
     upstream gateway {
-        server localhost:8001;
+        server gateway:8001;
     }

     upstream langgraph {
-        server localhost:2024;
+        server langgraph:2024;
     }

     upstream frontend {
-        server localhost:3000;
+        server frontend:3000;
     }

     server {
@@ -188,7 +188,7 @@ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
 The provisioner runs as part of the docker-compose-dev stack:

 ```bash
-# Start all services including provisioner
+# Start Docker services (provisioner starts only when config.yaml enables provisioner mode)
 make docker-start

 # Or start just the provisioner

@@ -249,6 +249,18 @@ docker exec deer-flow-gateway curl -s $SANDBOX_URL/v1/sandbox
 - Run `kubectl config view` to verify
 - Check the volume mount in docker-compose-dev.yaml

+### Issue: "Kubeconfig path is a directory"
+
+**Cause**: The mounted `KUBECONFIG_PATH` points to a directory instead of a file.
+
+**Solution**:
+- Ensure the compose mount source is a file (e.g., `~/.kube/config`), not a directory
+- Verify inside the container:
+  ```bash
+  docker exec deer-flow-provisioner ls -ld /root/.kube/config
+  ```
+- The expected output should indicate a regular file (`-`), not a directory (`d`)
+
 ### Issue: "Connection refused" to K8s API

 **Cause**: The provisioner can't reach the K8s API server.
@@ -80,12 +80,29 @@ def _init_k8s_client() -> k8s_client.CoreV1Api:
     Tries the mounted kubeconfig first, then falls back to in-cluster
     config (useful if the provisioner itself runs inside K8s).
     """
-    try:
-        k8s_config.load_kube_config(config_file=KUBECONFIG_PATH)
-        logger.info(f"Loaded kubeconfig from {KUBECONFIG_PATH}")
-    except Exception:
-        logger.warning("Could not load kubeconfig from file, trying in-cluster config")
-        k8s_config.load_incluster_config()
+    if os.path.exists(KUBECONFIG_PATH):
+        if os.path.isdir(KUBECONFIG_PATH):
+            raise RuntimeError(
+                f"KUBECONFIG_PATH points to a directory, expected a file: {KUBECONFIG_PATH}"
+            )
+        try:
+            k8s_config.load_kube_config(config_file=KUBECONFIG_PATH)
+            logger.info(f"Loaded kubeconfig from {KUBECONFIG_PATH}")
+        except Exception as exc:
+            raise RuntimeError(
+                f"Failed to load kubeconfig from {KUBECONFIG_PATH}: {exc}"
+            ) from exc
+    else:
+        logger.warning(
+            f"Kubeconfig not found at {KUBECONFIG_PATH}; trying in-cluster config"
+        )
+        try:
+            k8s_config.load_incluster_config()
+        except Exception as exc:
+            raise RuntimeError(
+                "Failed to initialize Kubernetes client. "
+                f"No kubeconfig at {KUBECONFIG_PATH}, and in-cluster config is unavailable: {exc}"
+            ) from exc

     # When connecting from inside Docker to the host's K8s API, the
     # kubeconfig may reference ``localhost`` or ``127.0.0.1``. We

@@ -103,15 +120,27 @@ def _init_k8s_client() -> k8s_client.CoreV1Api:


 def _wait_for_kubeconfig(timeout: int = 30) -> None:
-    """Block until the kubeconfig file is available."""
+    """Wait for kubeconfig file if configured, then continue with fallback support."""
     deadline = time.time() + timeout
     while time.time() < deadline:
-        if os.path.exists(KUBECONFIG_PATH):
-            logger.info(f"Found kubeconfig at {KUBECONFIG_PATH}")
-            return
+        if os.path.isfile(KUBECONFIG_PATH):
+            logger.info(f"Found kubeconfig file at {KUBECONFIG_PATH}")
+            return
+        if os.path.isdir(KUBECONFIG_PATH):
+            raise RuntimeError(
+                "Kubeconfig path is a directory. "
+                f"Please mount a kubeconfig file at {KUBECONFIG_PATH}."
+            )
+        if os.path.exists(KUBECONFIG_PATH):
+            raise RuntimeError(
+                f"Kubeconfig path exists but is not a regular file: {KUBECONFIG_PATH}"
+            )
         logger.info(f"Waiting for kubeconfig at {KUBECONFIG_PATH} …")
         time.sleep(2)
-    raise RuntimeError(f"Kubeconfig not found at {KUBECONFIG_PATH} after {timeout}s")
+    logger.warning(
+        f"Kubeconfig not found at {KUBECONFIG_PATH} after {timeout}s; "
+        "will attempt in-cluster Kubernetes config"
+    )


 def _ensure_namespace() -> None:
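Taken together, startup now degrades gracefully instead of crashing on a missing mount. A condensed, standalone sketch of the decision order, with the repo's module globals replaced by a parameter:

```python
import os

def choose_k8s_config_source(kubeconfig_path: str) -> str:
    """Condensed stand-in for the wait + init logic above."""
    if os.path.isdir(kubeconfig_path):
        # Fail fast: Docker created a directory where a file was expected.
        raise RuntimeError("kubeconfig path is a directory; mount a file")
    if os.path.isfile(kubeconfig_path):
        return "kubeconfig-file"   # load_kube_config(config_file=...)
    return "in-cluster"            # load_incluster_config() fallback

print(choose_k8s_config_source(os.path.expanduser("~/.kube/config")))
```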
@@ -196,6 +225,31 @@ def _build_pod(sandbox_id: str, thread_id: str) -> k8s_client.V1Pod:
             },
         ),
         spec=k8s_client.V1PodSpec(
+            init_containers=[
+                k8s_client.V1Container(
+                    name="init-user-data-permissions",
+                    image=SANDBOX_IMAGE,
+                    image_pull_policy="IfNotPresent",
+                    command=[
+                        "/bin/sh",
+                        "-c",
+                        "mkdir -p /mnt/user-data/workspace /mnt/user-data/uploads /mnt/user-data/outputs && chmod -R 0777 /mnt/user-data",
+                    ],
+                    volume_mounts=[
+                        k8s_client.V1VolumeMount(
+                            name="user-data",
+                            mount_path="/mnt/user-data",
+                            read_only=False,
+                        ),
+                    ],
+                    security_context=k8s_client.V1SecurityContext(
+                        run_as_user=0,
+                        run_as_group=0,
+                        privileged=False,
+                        allow_privilege_escalation=False,
+                    ),
+                )
+            ],
             containers=[
                 k8s_client.V1Container(
                     name="sandbox",
@@ -30,6 +30,7 @@ import { Welcome } from "@/components/workspace/welcome";
 import { useI18n } from "@/core/i18n/hooks";
 import { useNotification } from "@/core/notification/hooks";
 import { useLocalSettings } from "@/core/settings";
+import { bootstrapRemoteSkill } from "@/core/skills";
 import { type AgentThread, type AgentThreadState } from "@/core/threads";
 import { useSubmitThread, useThreadStream } from "@/core/threads/hooks";
 import {

@@ -80,24 +81,75 @@ export default function ChatPage() {
       }, 100);
     }
   }, [inputInitialValue]);
-  const isNewThread = useMemo(
-    () => threadIdFromPath === "new",
-    [threadIdFromPath],
-  );
+  // UI mode depends only on route: /workspace/chats/new is always "new page" mode.
+  const isNewThread = useMemo(() => threadIdFromPath === "new", [threadIdFromPath]);
+
+  // Submission strategy is controlled by `isnew` query param only.
+  // - isnew=false: reuse existing thread
+  // - otherwise: create/start a new session
+  const createNewSession = useMemo(() => {
+    if (threadIdFromPath !== "new") {
+      return false;
+    }
+
+    return searchParams.get("isnew")?.trim().toLowerCase() !== "false";
+  }, [threadIdFromPath, searchParams]);
+
+  const uploadTarget = useMemo(() => {
+    const target = searchParams.get("upload_target")?.trim().toLowerCase();
+    return target === "skill" ? "skill" : undefined;
+  }, [searchParams]);
+
+  const skillBootstrap = useMemo(() => {
+    const skillIdRaw = searchParams.get("skill_id")?.trim();
+    if (!skillIdRaw) return undefined;
+
+    const contentId = Number(skillIdRaw);
+    if (!Number.isFinite(contentId)) return undefined;
+
+    const languageTypeRaw =
+      searchParams.get("languageType")?.trim() ??
+      searchParams.get("language_type")?.trim();
+    const languageType = languageTypeRaw
+      ? Number(languageTypeRaw)
+      : 0;
+
+    return {
+      contentId,
+      languageType: Number.isFinite(languageType) ? languageType : 0,
+    };
+  }, [threadIdFromPath, searchParams]);

   const [threadId, setThreadId] = useState<string | null>(null);
   useEffect(() => {
     if (threadIdFromPath !== "new") {
       setThreadId(threadIdFromPath);
     } else {
-      setThreadId(uuid());
+      const queryThreadId = searchParams.get("thread_id")?.trim();
+      setThreadId(queryThreadId || uuid());
     }
-  }, [threadIdFromPath]);
+  }, [threadIdFromPath, searchParams]);
+
+  // Runtime strategy for /new page:
+  // - UI remains new-page mode
+  // - if isnew=false, execute against existing thread_id without creating a new one
+  const reuseExistingThread = useMemo(
+    () => threadIdFromPath === "new" && !createNewSession && !!threadId,
+    [threadIdFromPath, createNewSession, threadId],
+  );

   const { showNotification } = useNotification();
+  const [isSkillBootstrapping, setIsSkillBootstrapping] = useState(false);
+  const [skillBootstrapError, setSkillBootstrapError] = useState<string | null>(
+    null,
+  );
+  const skillBootstrappedKeyRef = useRef<string | null>(null);
   const [finalState, setFinalState] = useState<AgentThreadState | null>(null);
   const thread = useThreadStream({
-    isNewThread,
+    // Keep UI in new-page mode, but runtime may reuse existing thread
+    isNewThread: reuseExistingThread ? false : isNewThread,
     threadId,
     fetchStateHistory: true,
     onFinish: (state) => {
       setFinalState(state);
       if (document.hidden || !document.hasFocus()) {

@@ -133,13 +185,16 @@ export default function ChatPage() {
     return result;
   }, [thread, isNewThread]);

+  const [hasSubmitted, setHasSubmitted] = useState(false);
+  const suppressExistingThreadPrefetchUi = reuseExistingThread && !hasSubmitted;
+
   useEffect(() => {
     const pageTitle = isNewThread
       ? t.pages.newChat
       : thread.values?.title && thread.values.title !== "Untitled"
         ? thread.values.title
         : t.pages.untitled;
-    if (thread.isThreadLoading) {
+    if (thread.isThreadLoading && !suppressExistingThreadPrefetchUi) {
       document.title = `Loading... - ${t.pages.appName}`;
     } else {
       document.title = `${pageTitle} - ${t.pages.appName}`;

@@ -151,6 +206,7 @@ export default function ChatPage() {
     t.pages.appName,
     thread.values.title,
     thread.isThreadLoading,
+    suppressExistingThreadPrefetchUi,
   ]);

   const [autoSelectFirstArtifact, setAutoSelectFirstArtifact] = useState(true);

@@ -181,10 +237,60 @@ export default function ChatPage() {

   const [todoListCollapsed, setTodoListCollapsed] = useState(true);

-  const handleSubmit = useSubmitThread({
+  useEffect(() => {
+    if (!threadId || !skillBootstrap?.contentId) {
+      setIsSkillBootstrapping(false);
+      setSkillBootstrapError(null);
+      return;
+    }
+
+    const languageType = skillBootstrap.languageType ?? 0;
+    const initKey = `${threadId}:${skillBootstrap.contentId}:${languageType}`;
+    if (skillBootstrappedKeyRef.current === initKey) {
+      return;
+    }
+
+    let cancelled = false;
+
+    const runBootstrap = async () => {
+      setIsSkillBootstrapping(true);
+      setSkillBootstrapError(null);
+      try {
+        await bootstrapRemoteSkill({
+          thread_id: threadId,
+          content_id: skillBootstrap.contentId,
+          language_type: languageType,
+          target_dir: "/mnt/user-data/uploads/skill",
+          clear_target: true,
+        });
+
+        if (!cancelled) {
+          skillBootstrappedKeyRef.current = initKey;
+          setIsSkillBootstrapping(false);
+        }
+      } catch (error) {
+        if (!cancelled) {
+          const message =
+            error instanceof Error ? error.message : "Skill initialization failed";
+          setSkillBootstrapError(message);
+          setIsSkillBootstrapping(false);
+          showNotification("Skill initialization failed", { body: message });
+        }
+      }
+    };
+
+    void runBootstrap();
+
+    return () => {
+      cancelled = true;
+    };
+  }, [threadId, skillBootstrap, showNotification]);
+
+  const submitThread = useSubmitThread({
     isNewThread,
+    createNewSession,
     threadId,
     thread,
+    uploadTarget,
     threadContext: {
       ...settings.context,
       thinking_enabled: settings.context.mode !== "flash",

@@ -196,6 +302,16 @@ export default function ChatPage() {
       router.push(pathOfThread(threadId!));
     },
   });
+  const handleSubmit = useCallback(
+    (message: Parameters<typeof submitThread>[0]) => {
+      if (isSkillBootstrapping) {
+        return;
+      }
+      setHasSubmitted(true);
+      void submitThread(message);
+    },
+    [isSkillBootstrapping, submitThread],
+  );
   const handleStop = useCallback(async () => {
     await thread.stop();
   }, [thread]);

@@ -250,8 +366,11 @@ export default function ChatPage() {
               className={cn("size-full", !isNewThread && "pt-10")}
               threadId={threadId}
               thread={thread}
+              suppressThreadLoading={suppressExistingThreadPrefetchUi}
               messagesOverride={
-                !thread.isLoading && finalState?.messages
+                suppressExistingThreadPrefetchUi
+                  ? []
+                  : !thread.isLoading && finalState?.messages
                     ? (finalState.messages as Message[])
                     : undefined
               }

@@ -288,18 +407,34 @@ export default function ChatPage() {
                 className={cn("bg-background/5 w-full -translate-y-4")}
                 isNewThread={isNewThread}
                 autoFocus={isNewThread}
-                status={thread.isLoading ? "streaming" : "ready"}
+                status={
+                  suppressExistingThreadPrefetchUi
+                    ? "ready"
+                    : thread.isLoading
+                      ? "streaming"
+                      : "ready"
+                }
                 context={settings.context}
                 extraHeader={
                   isNewThread && <Welcome mode={settings.context.mode} />
                 }
-                disabled={env.NEXT_PUBLIC_STATIC_WEBSITE_ONLY === "true"}
+                disabled={
+                  env.NEXT_PUBLIC_STATIC_WEBSITE_ONLY === "true" ||
+                  isSkillBootstrapping
+                }
                 onContextChange={(context) =>
                   setSettings("context", context)
                 }
                 onSubmit={handleSubmit}
                 onStop={handleStop}
               />
+              {(isSkillBootstrapping || skillBootstrapError) && (
+                <div className="text-muted-foreground w-full translate-y-8 text-center text-xs">
+                  {isSkillBootstrapping
+                    ? "Initializing skill files..."
+                    : `Skill initialization failed: ${skillBootstrapError}`}
+                </div>
+              )}
               {env.NEXT_PUBLIC_STATIC_WEBSITE_ONLY === "true" && (
                 <div className="text-muted-foreground/67 w-full translate-y-12 text-center text-xs">
                   {t.common.notAvailableInDemoMode}
@@ -1,7 +1,7 @@
 import type { Message } from "@langchain/langgraph-sdk";
 import { FileIcon } from "lucide-react";
 import { useParams } from "next/navigation";
-import { memo, useMemo, type ImgHTMLAttributes } from "react";
+import { memo, useMemo, useState, type ImgHTMLAttributes } from "react";
 import rehypeKatex from "rehype-katex";

 import {

@@ -11,6 +11,7 @@ import {
   MessageToolbar,
 } from "@/components/ai-elements/message";
 import { Badge } from "@/components/ui/badge";
+import { Button } from "@/components/ui/button";
 import { resolveArtifactURL } from "@/core/artifacts/utils";
 import {
   extractContentFromMessage,

@@ -18,6 +19,7 @@ import {
   parseUploadedFiles,
   type UploadedFile,
 } from "@/core/messages/utils";
+import { materializeSkillYaml } from "@/core/skills";
 import { useRehypeSplitWordsIntoSpans } from "@/core/rehype";
 import { humanMessagePlugins } from "@/core/streamdown";
 import { cn } from "@/lib/utils";

@@ -221,6 +223,11 @@ function isImageFile(filename: string): boolean {
   return IMAGE_EXTENSIONS.includes(getFileExt(filename));
 }

+function isYamlFile(filename: string): boolean {
+  const ext = getFileExt(filename);
+  return ext === "yaml" || ext === "yml";
+}
+
 /**
  * Uploaded files list component
  */

@@ -256,11 +263,39 @@ function UploadedFileCard({
   file: UploadedFile;
   threadId: string;
 }) {
+  const [isMaterializing, setIsMaterializing] = useState(false);
+  const [materializeMessage, setMaterializeMessage] = useState<string | null>(
+    null,
+  );
+
   if (!threadId) return null;

   const isImage = isImageFile(file.filename);
+  const isYaml = isYamlFile(file.filename);
   const fileUrl = resolveArtifactURL(file.path, threadId);

+  const handleMaterializeYaml = async () => {
+    if (isMaterializing) return;
+    setIsMaterializing(true);
+    setMaterializeMessage(null);
+    try {
+      const result = await materializeSkillYaml({
+        thread_id: threadId,
+        path: file.path,
+        target_dir: "/mnt/user-data/uploads/skill",
+        clear_target: true,
+      });
+      setMaterializeMessage(
+        `Created ${result.created_files} files / ${result.created_directories} directories`,
+      );
+    } catch (error) {
+      const message = error instanceof Error ? error.message : "Parsing failed";
+      setMaterializeMessage(`Failed: ${message}`);
+    } finally {
+      setIsMaterializing(false);
+    }
+  };
+
   if (isImage) {
     return (
       <a

@@ -298,6 +333,27 @@ function UploadedFileCard({
           </Badge>
           <span className="text-muted-foreground text-[10px]">{file.size}</span>
         </div>
+        {/* Test button commented out; whether to keep it will be decided later. */}
+        {/* {isYaml && (
+          <div className="mt-1 flex flex-col gap-1">
+            <Button
+              size="sm"
+              variant="secondary"
+              className="h-7 text-xs"
+              onClick={() => {
+                void handleMaterializeYaml();
+              }}
+              disabled={isMaterializing}
+            >
+              {isMaterializing ? "Parsing..." : "Import as Skill directory"}
+            </Button>
+            {materializeMessage && (
+              <span className="text-muted-foreground text-[10px] leading-tight">
+                {materializeMessage}
+              </span>
+            )}
+          </div>
+        )} */}
       </div>
     );
   }
@@ -35,6 +35,7 @@ export function MessageList({
   threadId,
   thread,
   messagesOverride,
+  suppressThreadLoading = false,
   paddingBottom = 160,
 }: {
   className?: string;

@@ -42,13 +43,14 @@ export function MessageList({
   thread: UseStream<AgentThreadState>;
   /** When set (e.g. from onFinish), use instead of thread.messages so SSE end shows complete state. */
   messagesOverride?: Message[];
+  suppressThreadLoading?: boolean;
   paddingBottom?: number;
 }) {
   const { t } = useI18n();
   const rehypePlugins = useRehypeSplitWordsIntoSpans(thread.isLoading);
   const updateSubtask = useUpdateSubtask();
   const messages = messagesOverride ?? thread.messages;
-  if (thread.isThreadLoading) {
+  if (thread.isThreadLoading && !suppressThreadLoading) {
     return <MessageListSkeleton />;
   }
   return (
@@ -35,6 +35,38 @@ export interface InstallSkillResponse {
   message: string;
 }

+export interface MaterializeSkillYamlRequest {
+  thread_id: string;
+  path: string;
+  target_dir?: string;
+  clear_target?: boolean;
+}
+
+export interface MaterializeSkillYamlResponse {
+  success: boolean;
+  target_dir: string;
+  created_directories: number;
+  created_files: number;
+  message: string;
+}
+
+export interface BootstrapRemoteSkillRequest {
+  thread_id: string;
+  content_id: number;
+  language_type?: number;
+  target_dir?: string;
+  clear_target?: boolean;
+}
+
+export interface BootstrapRemoteSkillResponse {
+  success: boolean;
+  target_dir: string;
+  created_directories: number;
+  created_files: number;
+  sandbox_id: string;
+  message: string;
+}
+
 export async function installSkill(
   request: InstallSkillRequest,
 ): Promise<InstallSkillResponse> {

@@ -60,3 +92,51 @@ export async function installSkill(

   return response.json();
 }
+
+export async function materializeSkillYaml(
+  request: MaterializeSkillYamlRequest,
+): Promise<MaterializeSkillYamlResponse> {
+  const response = await fetch(
+    `${getBackendBaseURL()}/api/skills/materialize-yaml`,
+    {
+      method: "POST",
+      headers: {
+        "Content-Type": "application/json",
+      },
+      body: JSON.stringify(request),
+    },
+  );
+
+  if (!response.ok) {
+    const errorData = await response.json().catch(() => ({}));
+    const errorMessage =
+      errorData.detail ?? `HTTP ${response.status}: ${response.statusText}`;
+    throw new Error(errorMessage);
+  }
+
+  return response.json();
+}
+
+export async function bootstrapRemoteSkill(
+  request: BootstrapRemoteSkillRequest,
+): Promise<BootstrapRemoteSkillResponse> {
+  const response = await fetch(
+    `${getBackendBaseURL()}/api/skills/bootstrap-remote`,
+    {
+      method: "POST",
+      headers: {
+        "Content-Type": "application/json",
+      },
+      body: JSON.stringify(request),
+    },
+  );
+
+  if (!response.ok) {
+    const errorData = await response.json().catch(() => ({}));
+    const errorMessage =
+      errorData.detail ?? `HTTP ${response.status}: ${response.statusText}`;
+    throw new Error(errorMessage);
+  }
+
+  return response.json();
+}
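The same two gateway routes can be exercised outside the frontend. A hedged sketch using Python's requests library; the base URL assumes the default nginx entry point, the thread id and upload path are illustrative, and the response shape follows the TypeScript interfaces above:

```python
import requests

BASE = "http://localhost:2026"  # assumed default nginx entry point

resp = requests.post(
    f"{BASE}/api/skills/materialize-yaml",
    json={
        "thread_id": "demo-thread",  # hypothetical thread id
        # Illustrative uploads path for a previously uploaded YAML spec:
        "path": ".deer-flow/threads/demo-thread/user-data/uploads/skill.yaml",
        "target_dir": "/mnt/user-data/uploads/skill",
        "clear_target": True,
    },
    timeout=30,
)
resp.raise_for_status()
result = resp.json()
print(result["created_files"], result["created_directories"])
```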
@@ -10,6 +10,7 @@ import type { PromptInputMessage } from "@/components/ai-elements/prompt-input";
 import { getAPIClient } from "../api";
 import { useUpdateSubtask } from "../tasks/context";
 import { uploadFiles } from "../uploads";
+import type { UploadTarget } from "../uploads/api";

 import type {
   AgentThread,

@@ -20,10 +21,12 @@ import type {
 export function useThreadStream({
   threadId,
   isNewThread,
+  fetchStateHistory = true,
   onFinish,
 }: {
   isNewThread: boolean;
   threadId: string | null | undefined;
+  fetchStateHistory?: boolean;
   onFinish?: (state: AgentThreadState) => void;
 }) {
   const queryClient = useQueryClient();

@@ -33,7 +36,7 @@ export function useThreadStream({
     assistantId: "lead_agent",
     threadId: isNewThread ? undefined : threadId,
     reconnectOnMount: true,
-    fetchStateHistory: true,
+    fetchStateHistory,
     onCustomEvent(event: unknown) {
       console.info(event);
       if (

@@ -83,19 +86,40 @@ export function useSubmitThread({
   thread,
   threadContext,
   isNewThread,
+  createNewSession,
+  uploadTarget,
   afterSubmit,
 }: {
   isNewThread: boolean;
+  createNewSession: boolean;
   threadId: string | null | undefined;
   thread: UseStream<AgentThreadState>;
   threadContext: Omit<AgentThreadContext, "thread_id">;
+  uploadTarget?: UploadTarget;
   afterSubmit?: () => void;
 }) {
   const queryClient = useQueryClient();
+  const apiClient = getAPIClient();
   const callback = useCallback(
     async (message: PromptInputMessage) => {
       const text = message.text.trim();

+      // Guard: ignore empty submits (avoids unintended side effects during page init).
+      const hasFiles = !!(message.files && message.files.length > 0);
+      if (!text && !hasFiles) {
+        return;
+      }
+
+      // For "new session" semantics, ensure the target thread id starts fresh.
+      // If the same id already exists, delete it first and let submit recreate it.
+      if (createNewSession && threadId) {
+        try {
+          await apiClient.threads.delete(threadId);
+        } catch {
+          // Ignore delete errors (e.g. thread does not exist yet)
+        }
+      }
+
       // Upload files first if any
       if (message.files && message.files.length > 0) {
         try {

@@ -127,7 +151,7 @@ export function useSubmitThread({
       );

       if (files.length > 0 && threadId) {
-        await uploadFiles(threadId, files);
+        await uploadFiles(threadId, files, { target: uploadTarget });
       }
     } catch (error) {
       console.error("Failed to upload files:", error);

@@ -151,7 +175,7 @@ export function useSubmitThread({
         ] as HumanMessage[],
       },
       {
-        threadId: isNewThread ? threadId! : undefined,
+        threadId: createNewSession ? threadId! : undefined,
         streamSubgraphs: true,
         streamResumable: true,
         streamMode: ["values", "messages-tuple", "custom"],

@@ -167,7 +191,17 @@ export function useSubmitThread({
       void queryClient.invalidateQueries({ queryKey: ["threads", "search"] });
       afterSubmit?.();
     },
-    [thread, isNewThread, threadId, threadContext, queryClient, afterSubmit],
+    [
+      thread,
+      isNewThread,
+      createNewSession,
+      threadId,
+      threadContext,
+      uploadTarget,
+      queryClient,
+      apiClient,
+      afterSubmit,
+    ],
   );
   return callback;
 }
@@ -29,12 +29,15 @@ export interface ListFilesResponse {
   count: number;
 }

+export type UploadTarget = "skill";
+
 /**
  * Upload files to a thread
  */
 export async function uploadFiles(
   threadId: string,
   files: File[],
+  options?: { target?: UploadTarget },
 ): Promise<UploadResponse> {
   const formData = new FormData();

@@ -42,6 +45,10 @@ export async function uploadFiles(
     formData.append("files", file);
   });

+  if (options?.target) {
+    formData.append("upload_target", options.target);
+  }
+
   const response = await fetch(
     `${getBackendBaseURL()}/api/threads/${threadId}/uploads`,
     {
@@ -0,0 +1,27 @@
1. Network connectivity problem
   local_backend.py used localhost to reach the sandbox container,
   but inside a Docker container, localhost points to the container itself, not the host.
   It needs host.docker.internal instead.
   Fix: update the network configuration.
   File: backend/src/community/aio_sandbox/local_backend.py

   Line 116: added sandbox_host = "host.docker.internal"
   Line 119: changed sandbox_url=f"http://localhost:{port}" to sandbox_url=f"http://{sandbox_host}:{port}"
   Lines 166-167: made the same change to the URL construction in the discover method


2. Docker socket mount problem
   The gateway and langgraph containers need access to the Docker daemon to start sandbox containers,
   but the containers did not mount the Docker socket.
   Fix: add the Docker socket mount.
   File: docker/docker-compose-dev.yaml

   Added to the gateway and langgraph services:
   volumes:
     # Mount Docker socket for aio sandbox
     - /var/run/docker.sock:/var/run/docker.sock:ro

   environment:
     # Docker environment for aio sandbox
     - DOCKER_HOST=unix:///var/run/docker.sock
@@ -15,6 +15,51 @@ DOCKER_DIR="$PROJECT_ROOT/docker"
 # Docker Compose command with project name
 COMPOSE_CMD="docker compose -p deer-flow-dev -f docker-compose-dev.yaml"

+detect_sandbox_mode() {
+    local config_file="$PROJECT_ROOT/config.yaml"
+    local sandbox_use=""
+    local provisioner_url=""
+
+    if [ ! -f "$config_file" ]; then
+        echo "local"
+        return
+    fi
+
+    sandbox_use=$(awk '
+        /^[[:space:]]*sandbox:[[:space:]]*$/ { in_sandbox=1; next }
+        in_sandbox && /^[^[:space:]#]/ { in_sandbox=0 }
+        in_sandbox && /^[[:space:]]*use:[[:space:]]*/ {
+            line=$0
+            sub(/^[[:space:]]*use:[[:space:]]*/, "", line)
+            print line
+            exit
+        }
+    ' "$config_file")
+
+    provisioner_url=$(awk '
+        /^[[:space:]]*sandbox:[[:space:]]*$/ { in_sandbox=1; next }
+        in_sandbox && /^[^[:space:]#]/ { in_sandbox=0 }
+        in_sandbox && /^[[:space:]]*provisioner_url:[[:space:]]*/ {
+            line=$0
+            sub(/^[[:space:]]*provisioner_url:[[:space:]]*/, "", line)
+            print line
+            exit
+        }
+    ' "$config_file")
+
+    if [[ "$sandbox_use" == *"src.sandbox.local:LocalSandboxProvider"* ]]; then
+        echo "local"
+    elif [[ "$sandbox_use" == *"src.community.aio_sandbox:AioSandboxProvider"* ]]; then
+        if [ -n "$provisioner_url" ]; then
+            echo "provisioner"
+        else
+            echo "aio"
+        fi
+    else
+        echo "local"
+    fi
+}
+
 # Cleanup function for Ctrl+C
 cleanup() {
     echo ""
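The two awk programs above share one pattern: `in_sandbox` flips on at the `sandbox:` section header and flips off at the next non-indented, non-comment line, so `use:` or `provisioner_url:` keys elsewhere in config.yaml are ignored. A rough Python mirror of the whole detection, for illustration only (the function name and signature are assumptions):

```python
def detect_sandbox_mode(config_text: str) -> str:
    """Illustrative Python equivalent of the shell detect_sandbox_mode above.

    Scans the `sandbox:` section of config.yaml for `use:` and
    `provisioner_url:` keys and maps them to "local", "aio", or "provisioner".
    """
    sandbox_use = ""
    provisioner_url = ""
    in_sandbox = False
    for line in config_text.splitlines():
        if line.strip() == "sandbox:":
            in_sandbox = True
            continue
        # Any non-indented, non-comment line ends the sandbox section.
        if in_sandbox and line and not line[0].isspace() and not line.startswith("#"):
            in_sandbox = False
        if in_sandbox:
            stripped = line.strip()
            if stripped.startswith("use:") and not sandbox_use:
                sandbox_use = stripped[len("use:"):].strip()
            elif stripped.startswith("provisioner_url:") and not provisioner_url:
                provisioner_url = stripped[len("provisioner_url:"):].strip()

    if "src.community.aio_sandbox:AioSandboxProvider" in sandbox_use:
        return "provisioner" if provisioner_url else "aio"
    # LocalSandboxProvider, unknown providers, and missing config all fall
    # back to local mode, matching the shell script's else branches.
    return "local"
```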
@@ -49,10 +94,32 @@ init() {

 # Start Docker development environment
 start() {
+    local sandbox_mode
+    local nginx_conf
+    local services
+
     echo "=========================================="
     echo " Starting DeerFlow Docker Development"
     echo "=========================================="
     echo ""

+    sandbox_mode="$(detect_sandbox_mode)"
+
+    if [ "$sandbox_mode" = "provisioner" ]; then
+        nginx_conf="nginx.conf"
+        services="frontend gateway langgraph provisioner nginx"
+    else
+        nginx_conf="nginx.local.conf"
+        services="frontend gateway langgraph nginx"
+    fi
+
+    echo -e "${BLUE}Detected sandbox mode: $sandbox_mode${NC}"
+    if [ "$sandbox_mode" = "provisioner" ]; then
+        echo -e "${BLUE}Provisioner enabled (Kubernetes mode).${NC}"
+    else
+        echo -e "${BLUE}Provisioner disabled (not required for this sandbox mode).${NC}"
+    fi
+    echo ""
+
     # Set DEER_FLOW_ROOT for provisioner if not already set
     if [ -z "$DEER_FLOW_ROOT" ]; then

@@ -62,7 +129,7 @@ start() {
     fi

     echo "Building and starting containers..."
-    cd "$DOCKER_DIR" && $COMPOSE_CMD up --build -d --remove-orphans
+    cd "$DOCKER_DIR" && NGINX_CONF="$nginx_conf" $COMPOSE_CMD up --build -d --remove-orphans $services
     echo ""
     echo "=========================================="
     echo " DeerFlow Docker is starting!"
@@ -94,12 +161,16 @@ logs() {
             service="nginx"
             echo -e "${BLUE}Viewing nginx logs...${NC}"
             ;;
+        --provisioner)
+            service="provisioner"
+            echo -e "${BLUE}Viewing provisioner logs...${NC}"
+            ;;
         "")
             echo -e "${BLUE}Viewing all logs...${NC}"
            ;;
         *)
             echo -e "${YELLOW}Unknown option: $1${NC}"
-            echo "Usage: $0 logs [--frontend|--gateway]"
+            echo "Usage: $0 logs [--frontend|--gateway|--nginx|--provisioner]"
             exit 1
             ;;
     esac
@@ -138,40 +209,48 @@ help() {
     echo ""
     echo "Commands:"
     echo "  init          - Pull the sandbox image (speeds up first Pod startup)"
-    echo "  start         - Start all services in Docker (localhost:2026)"
+    echo "  start         - Start Docker services (auto-detects sandbox mode from config.yaml)"
     echo "  restart       - Restart all running Docker services"
     echo "  logs [option] - View Docker development logs"
     echo "    --frontend      View frontend logs only"
-    echo "    --gateway       View gateway logs only"
+    echo "    --gateway       View gateway logs only"
+    echo "    --nginx         View nginx logs only"
+    echo "    --provisioner   View provisioner logs only"
     echo "  stop          - Stop Docker development services"
     echo "  help          - Show this help message"
     echo ""
 }

-# Main command dispatcher
-case "$1" in
-    init)
-        init
-        ;;
-    start)
-        start
-        ;;
-    restart)
-        restart
-        ;;
-    logs)
-        logs "$2"
-        ;;
-    stop)
-        stop
-        ;;
-    help|--help|-h|"")
-        help
-        ;;
-    *)
-        echo -e "${YELLOW}Unknown command: $1${NC}"
-        echo ""
-        help
-        exit 1
-        ;;
-esac
+main() {
+    # Main command dispatcher
+    case "$1" in
+        init)
+            init
+            ;;
+        start)
+            start
+            ;;
+        restart)
+            restart
+            ;;
+        logs)
+            logs "$2"
+            ;;
+        stop)
+            stop
+            ;;
+        help|--help|-h|"")
+            help
+            ;;
+        *)
+            echo -e "${YELLOW}Unknown command: $1${NC}"
+            echo ""
+            help
+            exit 1
+            ;;
+    esac
+}
+
+if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
+    main "$@"
+fi
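Wrapping the dispatcher in main() behind a BASH_SOURCE guard means the script can be sourced without executing any command, which is what makes functions like detect_sandbox_mode testable in isolation. A hypothetical harness exploiting that (the script path and helper names are assumptions for illustration):

```python
import subprocess


def sandbox_mode(script_path: str = "scripts/docker-dev.sh") -> str:
    """Source the script and call detect_sandbox_mode directly.

    Sourcing is safe only because the BASH_SOURCE guard skips main() when the
    file is sourced rather than executed. The script path is an assumption.
    """
    result = subprocess.run(
        ["bash", "-c", f'source "{script_path}" && detect_sandbox_mode'],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    print(sandbox_mode())  # expected: "local", "aio", or "provisioner"
```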