LoongClaw is a secure, extensible, and evolvable claw baseline built in Rust.
It starts from assistant capabilities, but it is not meant to stop at being a general assistant. Over time, it is designed to grow into a foundation for team-facing vertical agents, where people and AI can keep collaborating and evolving together.
Why Loong • Positioning • Advantages • Contributing • Quick Start • Migration • Capabilities • Architecture • Docs
We chose Loong deliberately.
Loong refers to the Chinese dragon. In our context, it is less about conquest or aggression and closer to a form of strength shaped by vitality, balance, imagination, and coexistence. That feels much closer to the spirit we want LoongClaw to carry.
LoongClaw is not meant to stop at being another generic claw. We want it to grow with people, teams, and real working contexts, and over time become a reliable foundation for vertical agents. For us, Loong is not only a name. It also reflects the way we want to work: respect differences, stay open, practice reciprocity, think long-term, and stay grounded.
We want the community around LoongClaw to carry the same feeling: less noise, less posturing, and more cooperation around real problems. If contributors, users, and partners can trust one another and build useful things together, that is what matters most to us.
LoongClaw today is no longer just a thin shell around a model endpoint. It is a Rust-built claw baseline with explicit boundaries and room to keep taking shape. If you only look at entry commands like `onboard`, `ask`, or `chat`, you miss the more important story: the codebase already contains several layers that matter to teams.
| Core capability | What is already real | Why it matters |
|---|---|---|
| Governance-native execution | capability tokens, policy decisions, approval requests, and audit events already sit in critical execution paths | this is much closer to a team system than to a single-user demo |
| Explicit execution planes | connector, runtime, tool, and memory are separate kernel planes with symmetric core / extension registration | vertical shaping can replace planes instead of repeatedly rewriting the kernel |
| Separate control plane | ACP already exists as its own control plane across backend, binding, registry, runtime, analytics, and store modules | future routing, collaboration, and richer agent lifecycle work have a place to live |
| Shapeable context | the context engine already has bootstrap, ingest, after_turn, compact_context, and subagent hooks | context and memory are not hardcoded into a single prompt builder |
| Runtime-truthful tool surface | the tool catalog carries risk classes, approval modes, and Runtime / Planned visibility | what users see is closer to what the system can actually do right now |
| Migration-aware setup | onboard can detect current setup, Codex config, environment, and workspace guidance; the public migration CLI is now loongclaw migrate | teams do not have to rebuild configuration and long-lived context from scratch |
| Multi-surface delivery | beyond CLI, Telegram, Feishu / Lark, and Matrix already exist as runtime-backed surfaces with typed config, routing, and security validation | the product already reaches beyond a local terminal-only experiment |
That is why we increasingly describe LoongClaw as an early foundation for vertical agents. The governance boundary, extension boundary, and delivery boundary are already visible today.
The vision goes well beyond a personal assistant.
Our vision is to make LoongClaw a foundation for vertical agents: more focused than a general assistant, more controllable, and better suited for real team workflows. We want teams to build and evolve those agents faster through low-code or zero-code workflows on top of a stable core and explicit extension seams, instead of rebuilding the system from scratch each time.
That direction does not stop at software-only agent workflows. Over time, we also care about hardware, robotics, and embodied intelligence as natural extensions of the same foundation. The goal is not only to connect models to chat surfaces, but to grow a base layer that can eventually bridge digital systems and real-world action.
If you place LoongClaw against a few common AI-agent product shapes, it sits between a runnable assistant baseline and a governed vertical-agent base. The important difference is that it starts solving team problems earlier instead of postponing them.
| Design orientation | Assistant-first products | Framework-first products | LoongClaw |
|---|---|---|---|
| Starting point | optimize single-user chat experience first | offer a flexible but relatively empty builder layer first | ship a runnable baseline while bringing in team-facing boundaries early |
| Governance | often added through perimeter systems later | possible, but usually requires extra integration work | policy, approval, and audit are modeled inside critical execution paths |
| Extension model | often grows through plugins and scripts later | highly flexible, but each team may rebuild its own stack | extend through planes, adapters, packs, and channels with clearer boundaries |
| Delivery surfaces | often stop at CLI or a single chat UI | often thin on built-in delivery surfaces | CLI, Telegram, Feishu / Lark, and Matrix are already real delivery surfaces |
| Vertical evolution | can stall at being "a better assistant" | can stall at "you can build it yourself" | aims to keep shaping vertical agents on top of a stable Rust base |
| Long-term edge | usually software-assistant-centric | usually orchestration-centric | leaves room for hardware, robotics, and embodied intelligence over time |
The install script prefers the matching GitHub Release binary, verifies its SHA256 checksum, installs loongclaw, and can drop you straight into guided onboarding.
When you pass --onboard, the installer now seeds onboarding with a recommended web search default. It keeps DuckDuckGo as the general key-free fallback, and prefers Tavily when domestic Chinese locale/network hints suggest that direct DuckDuckGo access may be a worse default. If the shell already exposes exactly one ready credential-backed search provider such as PERPLEXITY_API_KEY or TAVILY_API_KEY, the installer prefers that provider before falling back to locale and route heuristics.
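To make the provider-preference rule concrete, here is a small hypothetical re-implementation in shell. The helper name `pick_search_default` and the output strings are illustrative only; the environment variable names come from the paragraph above, and the real installer's heuristics may differ:

```shell
# Hypothetical sketch of the "exactly one ready credential-backed provider"
# preference; the real installer's logic may differ.
pick_search_default() {
  ready=""
  count=0
  # Treat a non-empty env var as a "ready" credential-backed search provider
  if [ -n "${PERPLEXITY_API_KEY:-}" ]; then ready="PERPLEXITY_API_KEY"; count=$((count + 1)); fi
  if [ -n "${TAVILY_API_KEY:-}" ]; then ready="TAVILY_API_KEY"; count=$((count + 1)); fi
  if [ "$count" -eq 1 ]; then
    echo "prefer the provider backed by $ready"
  else
    echo "fall back to locale and route heuristics"
  fi
}

# Exactly one ready credential wins:
PERPLEXITY_API_KEY="" TAVILY_API_KEY="tvly-example" pick_search_default
# → prefer the provider backed by TAVILY_API_KEY
```

With zero or two ready credentials, the sketch falls back to the locale and route heuristics described above.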
On Linux x86_64, the installer now treats GNU and musl as distinct release artifacts:
- it prefers `x86_64-unknown-linux-gnu` when the host glibc satisfies the declared GNU floor
- it falls back to `x86_64-unknown-linux-musl` when glibc is too old or cannot be detected
- you can override the default with `--target-libc gnu|musl` or `LOONGCLAW_INSTALL_TARGET_LIBC`
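As a sketch only, the gnu-vs-musl decision could be approximated like this, where the `2.31` floor, the `ldd`-based detection, and the helper name `pick_linux_target` are all assumptions rather than the installer's actual implementation:

```shell
# Hypothetical sketch of the gnu-vs-musl artifact choice; the real installer
# and its declared glibc floor may differ.
GNU_FLOOR="2.31"   # assumed floor for this sketch

pick_linux_target() {
  # Try to read the host glibc version from ldd's first output line
  glibc="$(ldd --version 2>/dev/null | head -n1 | grep -o '[0-9][0-9]*\.[0-9][0-9]*' | head -n1)"
  if [ -n "$glibc" ] && \
     [ "$(printf '%s\n' "$GNU_FLOOR" "$glibc" | sort -V | head -n1)" = "$GNU_FLOOR" ]; then
    echo "x86_64-unknown-linux-gnu"    # glibc satisfies the floor
  else
    echo "x86_64-unknown-linux-musl"   # too old or undetectable
  fi
}

pick_linux_target
```

On hosts where `ldd` is absent or prints nothing parseable, the sketch conservatively falls back to the musl artifact, mirroring the behavior described above.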
Linux / macOS:

```shell
curl -fsSL https://raw.githubusercontent.com/loongclaw-ai/loongclaw/dev/scripts/install.sh | bash -s -- --onboard
curl -fsSL https://raw.githubusercontent.com/loongclaw-ai/loongclaw/dev/scripts/install.sh | bash -s -- --target-libc musl
```

Windows (PowerShell):

```powershell
$script = Join-Path $env:TEMP "loongclaw-install.ps1"
Invoke-WebRequest https://raw.githubusercontent.com/loongclaw-ai/loongclaw/dev/scripts/install.ps1 -OutFile $script
pwsh $script -Onboard
```

Source install:

```shell
bash scripts/install.sh --source --onboard
pwsh ./scripts/install.ps1 -Source -Onboard
cargo install --path crates/daemon
```

`loongclaw completions <shell>` prints a completion script to stdout. GitHub releases also publish pre-generated completion files if you prefer to download them instead of generating them locally.
Install shell completion
```shell
loongclaw completions bash >> ~/.bash_completion
source ~/.bash_completion
loongclaw completions zsh > "${fpath[1]}/_loongclaw"
loongclaw completions fish > ~/.config/fish/completions/loongclaw.fish
loongclaw completions powershell >> $PROFILE
loongclaw completions elvish >> ~/.config/elvish/rc.elv
```

- Run guided onboarding:

  ```shell
  loongclaw onboard
  ```

- Set the provider credential that onboarding selected:

  ```shell
  export PROVIDER_API_KEY=sk-...
  ```

  If you are using Volcengine, follow the example in the Configuration section below.

- Get a first answer:

  ```shell
  loongclaw ask --message "Summarize this repository and suggest the best next step."
  ```

- Continue in session when you need follow-up work:

  ```shell
  loongclaw chat
  ```

- Repair local health issues when needed:

  ```shell
  loongclaw doctor --fix
  ```

- Inspect the retained audit window when you need debugging evidence:

  ```shell
  loongclaw audit recent --limit 20
  loongclaw audit summary --limit 200 --json
  ```
Channel setup comes after the base CLI path is healthy.
LoongClaw ships a built-in developer observability lane for kernel-backed debugging and review. The app runtime writes audit events to `~/.loongclaw/audit/events.jsonl` by default with `[audit].mode = "fanout"`, so policy denials, token lifecycle events, and other security-critical evidence survive process restarts.
```shell
loongclaw doctor --config ~/.loongclaw/config.toml
loongclaw doctor --config ~/.loongclaw/config.toml --json
loongclaw audit recent --config ~/.loongclaw/config.toml
loongclaw audit summary --config ~/.loongclaw/config.toml
loongclaw audit recent --config ~/.loongclaw/config.toml --json
if [ -f ~/.loongclaw/audit/events.jsonl ]; then tail -n 20 ~/.loongclaw/audit/events.jsonl; else echo "audit journal is created on first audit write"; fi
```

`doctor` now surfaces audit retention mode and journal directory readiness in addition to the existing runtime checks. For durable modes (`fanout` or `jsonl`), LoongClaw will create the journal directory on first write, and `doctor --fix` can pre-create it when you want a clean preflight. Use `audit recent` when you want the bounded last-N event window and `audit summary` when you want a compact kind/count rollup plus last-seen fields. Raw `tail` remains a fallback when you need the original JSONL lines.
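If you need a rollup without the CLI, a rough equivalent of `audit summary` can be pulled straight from the journal. This sketch assumes each JSONL event carries a top-level `kind` field, which is an inference from the kind/count rollup described above, not a documented schema guarantee:

```shell
# Rough kind/count rollup from the raw JSONL journal (event schema assumed).
journal="$HOME/.loongclaw/audit/events.jsonl"
if [ -f "$journal" ]; then
  # Pull each "kind" value, then count occurrences, most frequent first
  grep -o '"kind":"[^"]*"' "$journal" | sort | uniq -c | sort -rn
else
  echo "audit journal is created on first audit write"
fi
```

Prefer `audit summary` when the CLI is healthy; the raw pipeline is only a fallback for offline inspection of a copied journal.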
When provider model probing fails before any HTTP status is returned, doctor now adds a provider route probe for the active request/models host. That probe surfaces the host and port, DNS resolution results, fake-ip-style addresses, and a short TCP reachability check so you can separate local proxy/TUN/fake-ip instability from true upstream unavailability.
1. Web UI
We are currently building the first usable local LoongClaw Web UI.
It is an optional install surface, and the current scope includes:
- chat
- dashboard
- onboarding
The initial product mode stays same-origin and local by default.
This surface is still evolving and should be understood as an active MVP rather than a fully finished product interface.
If you would like to help us continue improving it, please switch to the web branch and share feedback there.
`loongclaw onboard` uses `provider.api_key = { env = "..." }` to reference provider credentials, so secrets stay outside the config file:
```toml
active_provider = "openai"

[providers.openai]
kind = "openai"
api_key = { env = "PROVIDER_API_KEY" }
```

Guided onboarding now also lets you choose the default web search backend. Supported providers are `duckduckgo`, `brave`, `tavily`, `perplexity`, `exa`, and `jina`. If you keep the default choice, LoongClaw uses DuckDuckGo for the general case, or Tavily when domestic Chinese locale/network hints suggest it is the safer first-run default. When the selected provider requires a key, onboarding immediately asks which environment variable should back that credential and writes the config as an env reference such as `"${TAVILY_API_KEY}"`, instead of asking users to paste the secret inline. Non-interactive onboarding also accepts `--web-search-provider <provider>` and `--web-search-api-key <ENV_NAME>`. Explicit choices stay explicit: LoongClaw no longer silently falls back to DuckDuckGo when the operator explicitly selected a credential-backed provider.
```toml
[tools.web_search]
default_provider = "duckduckgo"
# brave_api_key = "${BRAVE_API_KEY}"
# tavily_api_key = "${TAVILY_API_KEY}"
# perplexity_api_key = "${PERPLEXITY_API_KEY}"
# exa_api_key = "${EXA_API_KEY}"
# jina_api_key = "${JINA_API_KEY}" # or "${JINA_AUTH_TOKEN}"
```

Volcengine / ARK example:
```shell
export ARK_API_KEY=your-ark-api-key
```

```toml
active_provider = "volcengine"

[providers.volcengine]
kind = "volcengine"
model = "your-coding-plan-model-id"
api_key = { env = "ARK_API_KEY" }
base_url = "https://ark.cn-beijing.volces.com"
chat_completions_path = "/api/v3/chat/completions"
```

Both `volcengine` and `volcengine_coding` use `api_key = { env = "ARK_API_KEY" }`. LoongClaw resolves that environment variable and sends it as `Authorization: Bearer <ARK_API_KEY>` on the OpenAI-compatible Volcengine path; AK/SK request signing is not used there.
Feishu channel example (webhook mode):
```shell
export FEISHU_APP_ID=cli_your_app_id
export FEISHU_APP_SECRET=your_app_secret
export FEISHU_VERIFICATION_TOKEN=your_verification_token
export FEISHU_ENCRYPT_KEY=your_encrypt_key
```

```toml
[feishu]
enabled = true
receive_id_type = "chat_id"
webhook_bind = "127.0.0.1:8080"
webhook_path = "/feishu/events"
allowed_chat_ids = ["oc_your_chat_id"]
```

```shell
loongclaw feishu-serve --config ~/.loongclaw/config.toml
```

LoongClaw defaults to `mode = "webhook"` and reads `FEISHU_APP_ID`, `FEISHU_APP_SECRET`, `FEISHU_VERIFICATION_TOKEN`, and `FEISHU_ENCRYPT_KEY`.
Feishu channel example (websocket mode):
```shell
export FEISHU_APP_ID=cli_your_app_id
export FEISHU_APP_SECRET=your_app_secret
```

```toml
[feishu]
enabled = true
mode = "websocket"
receive_id_type = "chat_id"
allowed_chat_ids = ["oc_your_chat_id"]
```

```shell
loongclaw feishu-serve --config ~/.loongclaw/config.toml
```

Webhook secrets are not required in websocket mode. If you are targeting Lark instead of Feishu, add `domain = "lark"`.
Matrix channel example:
```shell
export MATRIX_ACCESS_TOKEN=your_matrix_access_token
```

```toml
[matrix]
enabled = true
user_id = "@ops-bot:example.org"
base_url = "https://matrix.example.org"
allowed_room_ids = ["!ops:example.org"]
```

```shell
loongclaw matrix-serve --config ~/.loongclaw/config.toml --once
```

By default, LoongClaw reads `MATRIX_ACCESS_TOKEN`. Matrix room and user IDs often contain `:`, so the runtime preserves structured Matrix route/session IDs without relying on Matrix-specific path hacks.
Use `multi-channel-serve` when you want one process to keep an interactive CLI session in the foreground while supervising every enabled runtime-backed service channel in the same runtime.
```shell
loongclaw multi-channel-serve \
  --session cli-supervisor \
  --channel-account telegram=bot_123456 \
  --channel-account lark=alerts \
  --channel-account matrix=bridge-sync \
  --channel-account wecom=robot-prod \
  --config ~/.loongclaw/config.toml
```

`--session` is required. Repeat `--channel-account <CHANNEL=ACCOUNT>` to pin specific channel accounts. LoongClaw normalizes runtime-backed aliases such as `lark` to canonical channel ids and only supervises runtime-backed channels that are enabled in the loaded config.
`loongclaw channels --json` exposes the broader channel catalog separately from shipped runtime-backed surfaces. Planned surfaces already modeled in the catalog include Discord, Slack, LINE, DingTalk, WhatsApp, Google Chat, Signal, Synology Chat, Tlon, iMessage / BlueBubbles, Nostr, Twitch, Zalo, and WebChat, but they do not claim runtime support until an adapter is actually shipped.
Tool policy stays explicit:
```toml
[tools]
shell_default_mode = "deny"
shell_allow = ["echo", "ls", "git", "cargo"]

[tools.browser]
enabled = true
max_sessions = 8

[tools.web]
enabled = true
allowed_domains = ["docs.example.com"]
blocked_domains = ["*.internal.example"]

[tools.web_search]
enabled = true
default_provider = "duckduckgo" # or "ddg", "brave", "tavily", "perplexity", "exa", "jina"
timeout_seconds = 30
max_results = 5
# brave_api_key = "${BRAVE_API_KEY}"
# tavily_api_key = "${TAVILY_API_KEY}"
# perplexity_api_key = "${PERPLEXITY_API_KEY}"
# exa_api_key = "${EXA_API_KEY}"
# jina_api_key = "${JINA_API_KEY}" # or "${JINA_AUTH_TOKEN}"
```

Further references:

- `default_provider` accepts `duckduckgo` (or `ddg`), `brave`, `tavily`, `perplexity` (or `perplexity_search`), `exa`, and `jina` (or `jinaai` / `jina-ai`)
- `BRAVE_API_KEY`, `TAVILY_API_KEY`, `PERPLEXITY_API_KEY`, `EXA_API_KEY`, `JINA_API_KEY`, and `JINA_AUTH_TOKEN` stay supported as environment fallbacks
- Tool Surface Spec
- Product Specs
```shell
loongclaw validate-config --config ~/.loongclaw/config.toml --json
```
LoongClaw does not assume teams should start from zero.
Today there are two migration-facing paths:
- `onboard` already folds current setup, Codex config, environment settings, and workspace guidance into starting-point detection, then suggests a reusable starting point.
- when you want explicit control, the public migration entrypoint is now `loongclaw migrate`, which handles discovery, planning, selective apply, and rollback.
Its value is broader than copying a config file. LoongClaw distinguishes sources, recommends a primary source, and keeps migration split into narrower lanes such as prompt, profile, and external-skills state instead of blindly overwriting everything at once.
```shell
# Discover migration candidates under a root
loongclaw migrate --mode discover --input ~/legacy-claws

# Plan all sources and print a recommended primary source
loongclaw migrate --mode plan_many --input ~/legacy-claws

# Apply one selected source to a target config
loongclaw migrate --mode apply_selected --input ~/legacy-claws \
  --source-id openclaw --output ~/.loongclaw/config.toml --force

# Apply one selected source and bridge installable local external skills
loongclaw migrate --mode apply_selected --input ~/legacy-claws \
  --source-id openclaw --output ~/.loongclaw/config.toml \
  --apply-external-skills-plan --force

# Roll back the most recent migration
loongclaw migrate --mode rollback_last_apply --output ~/.loongclaw/config.toml
```

Deeper migration modes also exist, including `merge_profiles` for multi-source profile merging and `map_external_skills` for external-skills artifact mapping. The bridge remains opt-in: prompt/profile import still works by default, while `--apply-external-skills-plan` adds installable local skill directories to the managed runtime without replacing unrelated managed skills.
LoongClaw's external-skills runtime is operator-visible now instead of staying hidden behind migration helpers.
```shell
# Inspect resolved managed, user, and project skills with eligibility + invocation metadata
loongclaw skills list
loongclaw skills info release-guard

# Download a remote skill package under the external-skills policy boundary
loongclaw skills fetch https://skills.sh/release-guard.tgz --approve-download

# Download and sync a remote package into the managed runtime in one step
loongclaw skills fetch https://skills.sh/release-guard.tgz \
  --approve-download --install --replace
```

`loongclaw skills list` and `loongclaw skills info` surface per-skill metadata such as `invocation_policy`, required env or binaries, required runtime config gates, and declared tool restrictions. `loongclaw skills fetch --install --replace` gives operators a thin update path over the existing managed install lifecycle without bypassing the same runtime policy checks that govern downloads and installed skill execution.
- the kernel already carries governance primitives such as capability tokens, authorization, revocation, and audit events
- the tool catalog has built-in risk classes, approval modes, and runtime visibility, so higher-risk actions can move through an approval path
- browser and web tooling share the same controlled network boundary, and external skills stay opt-in under explicit policy
- the kernel is split into four execution planes: `connector`, `runtime`, `tool`, and `memory`
- each plane supports a core / extension adapter structure, so specialization goes through explicit seams instead of ad-hoc kernel edits
- providers, tools, memory, channels, and packs can evolve on top of those boundaries
- the context engine includes `bootstrap`, `ingest`, `after_turn`, `compact_context`, and subagent lifecycle hooks
- ACP acts as a separate control plane for backend, binding, registry, runtime, and related coordination work
- profiles, summaries, migration, and canonical history together support long-lived context
- CLI is first-class today, but it is no longer the only surface
- Telegram, Feishu / Lark, and Matrix already exist as real channel surfaces with runtime state and security validation
- browser, file, shell, and web tools are exposed through runtime policy rather than left in scattered helper scripts
LoongClaw is organized as a 7-crate Rust workspace with a strict dependency DAG:
```
contracts (leaf -- zero internal deps)
├── kernel   --> contracts
├── protocol (independent leaf)
├── app      --> contracts, kernel
├── spec     --> contracts, kernel, protocol
├── bench    --> contracts, kernel, spec
└── daemon (binary) --> all of the above
```

| Crate | Role |
|---|---|
| `contracts` | Stable shared ABI surface |
| `kernel` | Policy, audit, capability, pack, and governance core |
| `protocol` | Typed transport and routing contracts |
| `app` | Providers, tools, channels, memory, and conversation runtime |
| `spec` | Deterministic execution specs |
| `bench` | Benchmark harness and gates |
| `daemon` | Runnable CLI binary and operator-facing commands |
Three design rules matter most:
- governance-first: policy, approvals, and audit are modeled in critical execution paths rather than bolted on later
- additive evolution: public contracts grow without breaking existing integrations
- small core, rich seams: specialization should happen through adapters and packs, not by mutating the kernel every time
- Small kernel, explicit boundaries: `contracts`, `kernel`, `protocol`, and `app` are separated so transport, policy, runtime, and product surfaces can evolve without tangling the core.
- Core / Extension approach: runtime, tool, memory, and connector surfaces are organized around trusted cores with richer extension layers, so specialization goes through adapters instead of kernel forks.
- Control planes stay distinct: provider turns, context assembly, channel routing, and ACP control behavior are modeled as separate concerns, which keeps future collaboration and routing upgrades from forcing a rewrite of the conversation core.
- Governance is not an afterthought: capability checks, policy gates, approvals, and audit trails are part of the main execution path rather than a perimeter feature added later.
- The product layer is already concrete: a CLI-first entry path, Telegram / Feishu / Matrix channels, browser / file / shell / web tools, and configurable provider / memory / tool-policy baselines already form a real path through the current system.
Some ecosystem pieces are still better described as architecture direction than as finished product surfaces, and we prefer to say that plainly in the README.
For the full layered execution model, see ARCHITECTURE.md and Layered Kernel Design.
| Document | Description |
|---|---|
| Architecture | Crate map and layered execution overview |
| Core Beliefs | Core engineering principles |
| Roadmap | Stage-based milestones and direction |
| Product Sense | Current product contract and user journey |
| Product Specs | User-facing requirements for onboarding, ask, doctor, channels, and memory |
| Contribution Areas | The kinds of design, engineering, docs, and community help that would make the biggest difference right now |
| Reliability | Build and kernel invariants |
| Security | Security policy and disclosure path |
| Changelog | Release history |
Contributions are welcome. See CONTRIBUTING.md for the full workflow.
If you want to see the areas where help is especially welcome, start with Contribution Areas We Especially Welcome.
LoongClaw is licensed under the MIT License.
Copyright (c) 2026 LoongClaw AI