Self-hosting is not yet a “one command and you’re done” path. The production worker fleet runs on OVHCloud bare-metal with systemd — not as a Docker container you pull from Docker Hub. The fully containerized self-hosted experience (single docker compose up, zombiectl worker token for bootstrap, etc.) is on the roadmap but does not ship today. This page is a map of what works now and where to look.

Who this page is for

Operators who want one of:
  • Worker-only self-hosting — keep using the hosted control plane at api.usezombie.com, but run the worker inside your own network (homelab, on-prem, air-gapped). The “never hand over the kubeconfig” narrative on the Homelab Zombie page is this mode.
  • Full self-hosting — run the control plane + worker on your own infrastructure. Today this requires running the zombied Zig binary directly; see Architecture for the component map.
If you just want to try UseZombie, use the hosted Quickstart instead.

What ships today

| Component | Today | Roadmap |
| --- | --- | --- |
| Local dev data-plane | docker compose up at the repo root brings up Postgres + Redis only. | Unchanged — the compose file is for local dev, not production. |
| Control plane (API + executor) | Zig binaries. make up builds and runs them against the local compose infra. For production, see API server, Executor. | Single-container production image. |
| Worker | zombied worker binary as a systemd service on OVHCloud bare-metal. Configured via /opt/zombie/.env. See Worker for the canonical procedure. | usezombie/worker:latest Docker image, bootstrap token via zombiectl worker token, suitable for homelab placement. |
| Worker auth | Configured manually in the .env EnvironmentFile on the worker machine. | zombiectl worker token lifecycle (mint, rotate, revoke) driven by the CLI. |
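For orientation, today's worker deployment amounts to a systemd unit wrapping the zombied binary. The sketch below is hypothetical — the ExecStart path, subcommand, and restart options are assumptions; only /opt/zombie/.env as the EnvironmentFile comes from this page. See Worker for the canonical unit.

```ini
# Hypothetical sketch of the worker unit; see the Worker page for the real one.
[Unit]
Description=UseZombie worker (zombied)
After=network-online.target
Wants=network-online.target

[Service]
# Binary path and subcommand are assumptions for illustration.
ExecStart=/opt/zombie/zombied worker
# The .env EnvironmentFile is where worker auth is configured today.
EnvironmentFile=/opt/zombie/.env
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```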

Provisioning credentials works today

Regardless of how your worker is deployed, zombiectl credential add + the tenant vault are stable today:
zombiectl credential add kube_homelab \
  --value "$(op read 'op://homelab/k3s/kubeconfig')"
Any vault that can emit a secret to stdout works — op, vault, gcloud secrets, aws secretsmanager, even a pass entry. The --value flag reads once, encrypts immediately, and drops the plaintext. Environment variables work too, as a fallback:
export KUBE_HOMELAB="$(cat ~/.kube/config)"
zombiectl credential add kube_homelab --value "$KUBE_HOMELAB"
unset KUBE_HOMELAB
Reference the credential by name in any zombie’s TRIGGER.md:
credentials:
  - kube_homelab
skills:
  - kubectl
Read-only policy (“only get, describe, logs, top; never delete or secrets”) lives as prose in the zombie’s SKILL.md, not in structured YAML — see Homelab Zombie for the pattern.
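Since that policy is prose rather than structured YAML, it might read something like the following. This is a hypothetical sketch of a SKILL.md excerpt, not the canonical wording — see Homelab Zombie for the real pattern.

```markdown
## kubectl policy

You may run read-only kubectl commands only: get, describe, logs, top.
Never run delete, apply, edit, patch, or scale, and never read Secret
objects. If a task appears to require a write, stop and report instead.
```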

Once the worker is reachable

With a worker online (via whichever deployment model you’re running), everything from the hosted Quickstart works identically:
zombiectl install slack-bug-fixer
cd slack-bug-fixer
zombiectl up
Grab the zombie ID from the output and fire a curl at whichever control-plane URL your worker is paired with:
curl -X POST https://<your-control-plane>/v1/webhooks/$ZOMBIE_ID \
  -H "Content-Type: application/json" \
  -d '{"event_id":"demo-001","type":"message.received","data":{}}'
Read logs:
zombiectl logs --zombie $ZOMBIE_ID --limit 50
Paginate with --cursor <next_cursor>; a truncated response prints the next_cursor value at the end.
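The pagination loop looks like this in outline. Here fetch_page is a stand-in for zombiectl logs --zombie $ZOMBIE_ID --limit 50 [--cursor ...], stubbed with two fixed pages so the sketch is self-contained; the exact "next_cursor: ..." output line is an assumption for illustration.

```shell
# Stub for `zombiectl logs ... [--cursor <c>]`: page 1 is truncated and
# ends with a next_cursor line; page 2 is the final page.
fetch_page() {
  case "$1" in
    "")  echo "line 1"; echo "next_cursor: abc" ;;
    abc) echo "line 2" ;;
  esac
}

cursor=""
while :; do
  out=$(fetch_page "$cursor")
  # Print the log lines, dropping the cursor footer.
  echo "$out" | grep -v '^next_cursor:'
  # Extract the cursor for the next request, if any.
  cursor=$(echo "$out" | sed -n 's/^next_cursor: //p')
  [ -z "$cursor" ] && break
done
```

The same shape works with the real CLI: feed each response's next_cursor back via --cursor until no cursor is printed.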

What’s next

Architecture

Component map, ports, process boundaries. Start here for full self-hosting.

Worker deployment

The canonical systemd-on-bare-metal procedure that production uses today.

API server

Running the control-plane API binary.

Executor

The sandbox sidecar that owns run lifecycle.

Security posture

Sandbox details, credential firewall, worker isolation.

Zombie lifecycle

The commands you’ll run day-to-day against your self-hosted stack.