deerflow2/backend/packages/harness/deerflow/models
NmanQAQ dd30e609f7
feat(models): add vLLM provider support (#1860)
Adds support for vLLM 0.19.0 OpenAI-compatible chat endpoints and fixes the Qwen reasoning toggle so flash mode can actually disable thinking.

Co-authored-by: NmanQAQ <normangyao@qq.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2026-04-06 15:18:34 +08:00
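The commit pairs two things: talking to vLLM through its OpenAI-compatible server, and passing Qwen's thinking switch through that API. A minimal sketch of that interaction is below, using the stock OpenAI Python client rather than the provider classes in this directory; the base URL, model id, and api_key value are placeholders, not deerflow's actual configuration. vLLM forwards chat_template_kwargs supplied via extra_body to the model's chat template, and Qwen templates expose an enable_thinking flag there, which is the mechanism a "flash" mode can use to turn reasoning off.

    # Sketch only: endpoint address and model id are assumptions, not
    # values taken from vllm_provider.py or factory.py.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible server (assumed address)
        api_key="EMPTY",  # vLLM accepts any key unless auth is configured
    )

    response = client.chat.completions.create(
        model="Qwen/Qwen3-8B",  # placeholder model id
        messages=[{"role": "user", "content": "Summarize vLLM in one sentence."}],
        # Qwen chat templates accept an enable_thinking switch; vLLM passes
        # chat_template_kwargs from extra_body through to the template, so
        # setting it False suppresses the <think> reasoning block.
        extra_body={"chat_template_kwargs": {"enable_thinking": False}},
    )
    print(response.choices[0].message.content)

Routing the toggle through extra_body keeps the request valid against the plain OpenAI schema, so the same client code works whether the flag is honored (vLLM + Qwen) or silently ignored by a backend that lacks it.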
__init__.py refactor: split backend into harness (deerflow.*) and app (app.*) (#1131) 2026-03-14 22:55:52 +08:00
claude_provider.py fix(oauth): Harden Claude OAuth cache-control handling (#1583) 2026-03-30 07:41:18 +08:00
credential_loader.py Fix Windows backend test compatibility (#1384) 2026-03-26 17:39:16 +08:00
factory.py feat(models): add vLLM provider support (#1860) 2026-04-06 15:18:34 +08:00
openai_codex_provider.py feat: add Claude Code OAuth and Codex CLI as LLM providers (#1166) 2026-03-22 22:39:50 +08:00
patched_deepseek.py refactor: split backend into harness (deerflow.*) and app (app.*) (#1131) 2026-03-14 22:55:52 +08:00
patched_minimax.py feat(harness): integration ACP agent tool (#1344) 2026-03-26 14:20:18 +08:00
patched_openai.py ci: enforce code formatting checks for backend and frontend (#1536) 2026-03-29 15:34:38 +08:00
vllm_provider.py feat(models): add vLLM provider support (#1860) 2026-04-06 15:18:34 +08:00