Source attribution. The narrative on this page follows
docs/brainstormed/docs/homelab-zombie-launch.md. The executable Homelab Zombie ships at samples/homelab/ in the usezombie repo as one SKILL.md + one TRIGGER.md — no sub-skill directories, no YAML allowlists. Authored by M33.

What it does

An agent with kubectl and Docker access, but built in a way designed to be safe to run unsupervised at 11pm on a bad day. You talk to it.

Three things that make it safe to run
1. The allowlist lives in prose, inside the SKILL.md prompt
The zombie’s SKILL.md is a markdown file the agent reads as its system prompt. It tells the agent, in plain English, which kubectl verbs to use and which to avoid:
> Use only kubectl get, describe, logs, top, and events. Never run delete, apply, patch, or any write verb. Never read secrets — even a get on a secret object is forbidden. If you need to understand an object that you can only see through a write verb, stop and report instead of attempting it.

The agent reasons with that policy the same way it reasons with any other instruction. When it produces a command that breaks the rule, the next loop’s self-reflection flags it; when nullclaw (the tool-enforcement layer) lands, the same prose is the contract the structural enforcer will gate against. YAGNI says don’t build the structural gate until a second zombie needs the same policy — today the homelab zombie is the only one, so prose is the whole spec.
The secrets denial is called out explicitly because log exfiltration via secrets is the first thing a bad agent run would try. The prompt names it. That is the allowlist.
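To make the policy concrete, here is a minimal sketch of what a structural gate over that prose could look like once nullclaw lands. All names and the function shape are illustrative assumptions, not nullclaw's actual API; the verb lists are taken directly from the SKILL.md text above.

```python
# Hypothetical sketch of a structural gate mirroring the prose policy.
# Nothing here is nullclaw's real API; verbs come from the SKILL.md quote.
import shlex

READ_VERBS = {"get", "describe", "logs", "top", "events"}

def allowed(command: str) -> tuple[bool, str]:
    """Return (ok, reason) for a proposed kubectl command line."""
    parts = shlex.split(command)
    if not parts or parts[0] != "kubectl":
        return False, "only kubectl commands are in scope"
    # First non-flag argument after "kubectl" is the verb.
    verb = next((p for p in parts[1:] if not p.startswith("-")), "")
    if verb not in READ_VERBS:
        return False, f"verb {verb!r} is not in the read-only allowlist"
    # Even a read on secrets is forbidden.
    if any(p in ("secret", "secrets") or p.startswith("secret/")
           for p in parts[2:]):
        return False, "access to secrets is forbidden, even via get"
    return True, "ok"
```

Under this sketch, `allowed("kubectl get pods -n media")` passes, while both `kubectl delete pod x` and `kubectl get secrets` are rejected with a reason the agent can put in its report.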
2. The agent never holds the kubeconfig
In most agent-plus-kubectl setups, the agent is a process with KUBECONFIG set in its environment — the LLM context, in theory, can be prompt-injected into echoing the token. This has been demonstrated against Claude Code and various self-hosted agents. It’s a real attack surface.
In UseZombie, the agent process literally does not have the credential. What it has is a placeholder string — a random UUID that looks nothing like a token. When kubectl inside the sandbox makes an HTTPS call to the cluster API, a proxy at the network boundary catches it, swaps the placeholder for the real credential, and re-originates the request. The real token never enters the memory of the process the LLM is driving.
Short-lived tokens in env vars still appear in prompt-injection exfiltration paths; placeholders don’t. The model can repeat the placeholder all day — it does nothing on its own.
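The boundary swap described above can be sketched in a few lines. This is an illustrative assumption about the mechanism, not UseZombie's implementation: the agent's sandbox holds only a random placeholder, and the proxy rewrites outbound headers before re-originating the request.

```python
# Hypothetical sketch of the boundary proxy's credential swap.
# The agent process only ever sees PLACEHOLDER; REAL_TOKEN lives
# solely on the proxy side, outside the sandbox.
import uuid

PLACEHOLDER = str(uuid.uuid4())      # what the agent process holds
REAL_TOKEN = "real-cluster-token"    # held only by the boundary proxy

def rewrite_headers(headers: dict[str, str]) -> dict[str, str]:
    """Swap the placeholder for the real credential at the network boundary."""
    return {name: value.replace(PLACEHOLDER, REAL_TOKEN)
            for name, value in headers.items()}
```

If the model is injected into echoing its "token", all it can leak is the placeholder — `rewrite_headers` is the only place the real credential ever appears, and it runs outside the process the LLM drives.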
3. The worker runs in your network, not ours
The control plane coordinates runs and stores audit logs. But the worker that actually executes tool calls runs on a box inside your homelab — a small Docker container on your k3s control plane node, a Pi behind your firewall, wherever you choose. The control plane never has a route to your k3s API. Your kubeconfig never leaves your network. If you pull the plug on the worker container, the whole thing stops.

Install
The sample is a homelab/ directory with exactly two files: SKILL.md (agent instructions + the read-only policy in prose) and TRIGGER.md (tool wiring, credential references, budget, network allowlist). No sub-skill directories.
For the full end-to-end — worker placement, Tailscale integration, cred add — follow the Operator quickstart.
What’s next
- v0.1 (current): read-only diagnosis. The output is a report with a proposed fix.
- v0.2 (next): writes behind approval gates. The agent proposes a kubectl patch, pushes the proposal to your phone via Slack, and waits for a tap.
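The v0.2 approval gate could look something like the sketch below — a hypothetical shape, not the shipped design: the agent can only record a proposal, the Slack push is stubbed with a comment, and nothing executes until a human flips the approval.

```python
# Hypothetical sketch of the v0.2 approval gate. The agent may only
# *propose* a write; execution is blocked until a human approves
# out-of-band (e.g. a tap in Slack, stubbed here).
from dataclasses import dataclass

@dataclass
class Proposal:
    command: str          # the write the agent wants, e.g. a kubectl patch
    approved: bool = False

class ApprovalGate:
    def __init__(self) -> None:
        self.pending: list[Proposal] = []

    def propose(self, command: str) -> Proposal:
        p = Proposal(command)
        self.pending.append(p)    # real version: push to phone via Slack here
        return p

    def execute(self, p: Proposal) -> str:
        if not p.approved:
            return "blocked: waiting for human approval"
        return f"would run: {p.command}"
```

The design choice is that approval lives outside the agent loop entirely: no prompt, injected or otherwise, can flip `approved`, because only the out-of-band channel writes it.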