# Run NanoClaw in Docker Sandboxes

Original link: https://nanoclaw.dev/blog/nanoclaw-docker-sandboxes/

## NanoClaw and Docker Sandboxes: Secure AI Agents

NanoClaw has partnered with Docker to launch Docker Sandboxes, bringing a new level of security and isolation to AI agents. With a single command (available on macOS and Windows, with Linux coming soon), users can run NanoClaw agents in isolated micro VMs, each with its own kernel and Docker daemon, preventing access to the host system.

This "design for distrust" approach ensures agents operate within explicit boundaries: each agent has its own filesystem, its own context, and access only to the tools and data it needs. This prevents data leakage between agents (for example, a sales agent reading personal messages) and protects the host from potentially malicious agent behavior.

Unlike alternatives such as OpenClaw, NanoClaw enforces these boundaries at the OS and micro-VM level rather than relying on agent instructions. The future vision includes controlled context sharing between agent teams, persistent agent identities, fine-grained permissions, and human-in-the-loop approvals: robust infrastructure for scaling AI agent teams within organizations. NanoClaw is open source and available on GitHub.

## NanoClaw and AI Agent Security

A recent Hacker News discussion covered NanoClaw, a lean implementation of OpenClaw for running LLMs, notably configured with Claude Code. Users praised NanoClaw's compact design, but the core debate centered on the *real* security challenges of AI agents.

The general consensus is that merely containerizing an LLM is insufficient protection. The main risk is not file access but the potential for agents to access and misuse personal data (email, calendars, banking information), essentially granting them "root access to your life".

Many commenters stressed the need for **fine-grained permissions** beyond sandboxing, advocating control over what actions an agent can perform (for example, read-only access to Gmail). Existing frameworks such as Smith-Core are attempting to address this through policy controls, but have received limited attention. The discussion highlighted a disconnect: security is a major concern, yet many people lack the understanding to implement effective solutions and often grant agents far more permissions than they need.

## Original Post

We announced today that we’ve partnered with Docker to enable running NanoClaw in Docker Sandboxes with one command. You can read Docker’s blog post here.

Get Started

```shell
# macOS (Apple Silicon)
curl -fsSL https://nanoclaw.dev/install-docker-sandboxes.sh | bash

# Windows (WSL)
curl -fsSL https://nanoclaw.dev/install-docker-sandboxes-windows.sh | bash
```

This handles the clone, setup, and Docker Sandbox configuration. You can also install manually from source.

Note: Docker Sandboxes are currently supported on macOS (Apple Silicon) and Windows (x86), with Linux support rolling out in the coming weeks.

Once it’s running, every agent gets its own isolated container inside a micro VM. No dedicated hardware needed. No complex setup.

How It Works

Docker Sandboxes run agents inside lightweight micro VMs, each with its own kernel, its own Docker daemon, and no access to your host system. This goes beyond container isolation: hypervisor-level boundaries with millisecond startup times.

NanoClaw maps onto this architecture naturally:

Inside a Docker Sandbox (micro VM), an isolated Docker daemon runs behind a hypervisor-level isolation boundary. For example:

- Agent #sales (Slack channel): own filesystem, own context/memory. Access: CRM, sales playbooks. Tools: email, calendar.
- Agent #support (Slack channel): own filesystem, own context/memory. Access: docs, ticket system. Tools: knowledge base, Jira.
- Agent #personal (WhatsApp): own filesystem, own context/memory. Access: personal calendar. Tools: reminders, notes.

Each NanoClaw agent runs in its own container with its own filesystem, context, tools, and session. Your sales agent can’t see your personal messages. Your support agent can’t access your CRM data. These are hard boundaries enforced by the OS, not instructions given to the agent.
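As an illustration of that per-agent setup, a container like this can be composed from plain `docker run` flags: a dedicated named volume for the agent's filesystem, no network by default, and a read-only image filesystem. This is a hedged sketch, not NanoClaw's actual launch code; the agent names, volume names, and image tag are hypothetical. The command is printed as a dry run so the isolation flags are visible without starting anything:

```shell
#!/usr/bin/env bash
# Sketch: build a per-agent `docker run` command and print it (dry run).
# Agent names, volume names, and the image tag are all hypothetical.
launch_agent() {
  local name="$1"
  # --volume:    a private named volume per agent (its own filesystem)
  # --network:   none by default; connectivity is opened deliberately
  # --read-only: keep the image filesystem immutable
  echo docker run -d \
    --name "agent-${name}" \
    --volume "agent-${name}-data:/workspace" \
    --network none \
    --read-only \
    nanoclaw/agent:latest
}

launch_agent sales
launch_agent support
```

Because each agent gets its own volume and its own (initially empty) network, "can't see other agents' data" falls out of the container configuration rather than anything the agent is told.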

The micro VM layer adds a second boundary. Even if an agent somehow broke out of its container, it hits the VM wall. Your host machine, your files, your credentials, your other applications are on the other side of a hard isolation boundary.

The Security Model: Design for Distrust

I wrote about this in Don’t Trust AI Agents: when you’re building with AI agents, they should be treated as untrusted and potentially malicious. Prompt injection, model misbehavior, things nobody’s thought of yet. The right approach is architecture that assumes agents will misbehave and contains the damage when they do.

That principle drives every design decision in NanoClaw. Don’t put secrets or credentials inside the agent’s environment. Give the agent access to exactly the data and tools it needs for its job, nothing more. Keep everything else on the other side of a hard boundary.
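One way to make "exactly the tools it needs, nothing more" concrete is a gate that sits outside the agent and only executes commands on an explicit allowlist. This is a minimal sketch under that assumption; the tool names and the `run_tool` helper are made up for illustration:

```shell
#!/usr/bin/env bash
# Sketch: an allowlist gate that sits outside the agent. The agent asks to
# run a tool; anything not explicitly permitted is refused.
# The tool names here are hypothetical.
ALLOWED_TOOLS="calendar-read crm-query email-send"

run_tool() {
  local tool="$1"; shift
  for allowed in $ALLOWED_TOOLS; do
    if [ "$tool" = "$allowed" ]; then
      echo "running: $tool $*"   # a real version would exec the tool here
      return 0
    fi
  done
  echo "denied: $tool" >&2
  return 1
}

run_tool crm-query "open deals"        # on the list: permitted
run_tool rm -rf /tmp/anything || true  # not on the list: refused
```

The point of the design is that the check lives in the gate, outside the agentic surface, so a misbehaving agent can ask for anything and still only reach the tools the allowlist grants.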

With Docker Sandboxes, that boundary is now two layers deep. Each agent runs in its own container (can’t see other agents’ data), and all containers run inside a micro VM (can’t touch your host machine). If a hallucination or a misbehaving agent can cause a security issue, the security model is broken. Security has to be enforced outside the agentic surface, not depend on the agent behaving correctly.

OpenClaw runs on your host with access to everything. Even with their opt-in sandbox mode, all agents share the same environment. There’s no hard boundary between them. Your personal assistant can see your work agent’s data.

The right mental model: think of your agent as a colleague you want to collaborate with, but design your security as if it’s a malicious actor. Those two things aren’t contradictory. That’s just good security engineering.

What’s Next

Dario Amodei talks about “a country of geniuses in a data center.” For that to become real, new infrastructure, orchestration layers, and runtimes must be purpose-built for agents operating at scale.

Today, a team can connect NanoClaw to multiple Slack channels and have separate agents handling different workloads, each isolated, each with its own context and data. But we’re heading somewhere much bigger.

Every employee will have a personal AI assistant. Every team will manage a team of agents. High-performing teams will manage hundreds. To get there, we need:

Controlled context sharing. Isolation is the foundation, but agents that work together need to share information. The hard part is the middle ground: agent teams that share all context freely within the team, but share selectively across team boundaries. You need to be able to lock everything down, control what goes in and what goes out, and then deliberately open up what should be shared. That needs to be native to the runtime, not bolted on.

Agents creating persistent agents. Not ephemeral sub-agents that spin up for a task and disappear. An agent adding a new member to its team, the way you hire someone. The new agent gets its own identity, its own persistent environment, its own data. It shows up tomorrow and remembers what it did yesterday. It accumulates context and expertise over time. This requires new primitives for identity, lifecycle management, and permission inheritance that don’t exist yet.

Fine-grained permissions and policies. Not just what tools an agent can access, but what it can do with them. Read email but not send. Access one repo but not another. Spend up to a threshold but no more.

Human-in-the-loop approvals. For irreversible actions, humans need to be in the approval chain. Agents propose, humans approve, agents execute.
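The last two items can be sketched together: a policy table that maps (agent, action) pairs to allow, deny, or ask, where "ask" routes through a human approval step before anything executes. Everything here (the agent names, actions, and the policy format itself) is hypothetical, just to show the shape:

```shell
#!/usr/bin/env bash
# Sketch of fine-grained permissions plus human-in-the-loop approval.
# Policy verdicts: allow / deny / ask. With "ask", the agent proposes,
# a human approves on stdin, and only then does the action execute.
# Agents, actions, and the policy are hypothetical.
policy() {
  case "$1:$2" in
    sales:email-read)  echo allow ;;   # reading email is fine...
    sales:email-send)  echo ask   ;;   # ...but sending needs approval
    support:repo-push) echo deny  ;;   # irreversible and out of scope
    *)                 echo deny  ;;   # default-deny everything else
  esac
}

request() {
  local agent="$1" action="$2"
  case "$(policy "$agent" "$action")" in
    allow) echo "executing: $agent/$action" ;;
    deny)  echo "denied: $agent/$action" ;;
    ask)
      printf 'approve %s for %s? [y/N] ' "$action" "$agent"
      read -r reply
      if [ "$reply" = "y" ]; then
        echo "executing: $agent/$action"
      else
        echo "rejected by human: $agent/$action"
      fi
      ;;
  esac
}

request sales email-read           # allow: runs immediately
echo y | request sales email-send  # ask: human approves, then runs
request support repo-push          # deny: never runs
```

Note the default-deny branch: an action nobody thought to write a rule for is refused, which matches the design-for-distrust posture above.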

NanoClaw is the secure, customizable runtime and orchestration layer for agent teams. Docker Sandboxes is the enterprise-grade infrastructure underneath. As agents move from single-player tools to full team members operating at enterprise scale, the stack that runs them needs to enforce isolation by default, enable controlled collaboration, and give organizations the visibility and governance they need. That’s what we’re building.


NanoClaw is an open-source, secure runtime and orchestration layer for agent teams. Star it on GitHub.
