Pollen is a self-organising mesh and WASM runtime written in pure Go. Workloads are "seeded" into the cluster, scale organically, and follow load. There is no central coordinator; every node makes decisions locally and deterministically, using gossiped CRDT runtime state as its source of truth. Same view of the world, same workload placement and routing.
The goal is for Pollen to turn a collection of heterogeneous machines into a blob of generic compute that can run absolutely anywhere. Think: a Raspberry Pi acting as though it has the power of a server-farm.
This demo shows a simple processing pipeline: two chained workloads and a single "sink" egress server running on my home laptop (all requests end up here). Ten freshly provisioned nodes around the globe are bootstrapped into the cluster, workloads are seeded, and ~4,000 req/s of calls arrive from 5 locations simultaneously. The scale-up and workload placement all happen organically: nodes gate incoming work, apply backpressure, and gossip their saturation across the cluster so other nodes know where to direct traffic. Pausable video at pln.sh.
- WASM seeds. `pln seed ./hello.wasm` here, `pln call hello greet` there; artifacts distribute peer-to-peer by hash. One host call invokes another seed by name (`pln://seed/<name>/<fn>`), so authz, routing, and policy can live inside WASM. Authored in Go, Rust, JS, Python, C#, Zig via Extism.
- Mesh services. `pln serve 8080 api` here, `pln connect api` there (or `pln://service/<name>` from a seed). TCP and UDP, end-to-end mTLS.
- Static sites & blobs. `pln seed ./public` publishes a site; `pln seed ./file` shares a file. Same verb across workloads, sites, and blobs; the kind is autodetected from what you point at. Content-addressed, gossiped, streamed peer-to-peer over QUIC.
- Self-organising. No scheduler, no leader, no coordinator. Topology, placement, and routing emerge from local state; calls go to the nearest, least-loaded replica, and replicas migrate toward demand.
- CRDT-native. A converging document on every node; changes gossip, conflicts resolve.
- Partition-tolerant. Both sides of a split keep running; state converges on rejoin; survivors rehost workloads from failed nodes.
- QUIC transport. One multiplexed, encrypted, UDP-based connection per peer carries gossip, services, and seeds. Connections punch direct between peers; otherwise they relay through any cluster node both peers can reach.
- Cryptographic admission. No shared secrets, no firewall rules. Every link is mTLS.
- Edge-ready. Pure Go, no CGO. Raspberry Pi to cloud host.
- Ergonomic. Opinionated defaults, opt-in configuration.
Full docs at docs.pln.sh:
- Quickstart — install, cluster, workload, call.
- Concepts — the model: mesh, gossiped CRDT, placement, storage, capabilities.
- How-to — recipes for relays, offline root, property-based access, rollouts.
- CLI reference — every command, every flag.
- Troubleshoot — symptom → cause → fix.
The rest of this README is a condensed tour. See the docs for the full picture.
```sh
curl -fsSL https://pln.sh/install.sh | bash
```

A thin wrapper around your platform's package manager (Homebrew on macOS, apt/dnf/yum/zypper on Linux), so upgrades, uninstalls, and service files are managed natively. On Linux, the installer reads /etc/os-release and refuses to guess on unknown distros; pass `--method tarball` to opt in to a /usr/local/bin binary install with the same daemon provisioning. On macOS, see the FAQ for a first-connect permissions note.
```sh
pln init                                            # creates a new cluster rooted here
pln bootstrap ssh user@host [--publisher|--admin]   # requires passwordless SSH + sudo
```

You have a zero-trust mesh, a peer-to-peer artifact store, and a WASM runtime. Public nodes automatically become relays, so the mesh handles NAT traversal without configuration. Pass `--publisher` to let the new node publish workloads and services, or `--admin` for full delegation authority so your root machine doesn't need to stay online.
With SSH. From any admin node:

```sh
pln bootstrap ssh user@host [--publisher|--admin] [--prop region=eu]
# Or pipe labelled targets from stdin or a file:
echo "media=alice@10.0.0.5" | pln bootstrap ssh -
```

Installs Pollen, enrols the target in the cluster, and starts the daemon. Linux targets only; needs SSH as root or passwordless sudo. The default tier is leaf (consume only); `--publisher` allows the joiner to publish, `--admin` delegates full admin authority. `--prop` bakes properties into each joiner's cert at issue time; prefix a target with `name=` to label the node. Run `pln bootstrap ssh --help` for the full flag set.
Out-of-band. Mint a token on an admin node, ship it to the joiner:

```sh
# Admin node:
pln invite [--publisher|--admin] [--subject foo]   # subject is the joiner's `pln id`
# New node:
pln join <token>
```

The token is self-contained: signed admission credentials, the cluster's root key, and every public relay address the cluster has organically learned. Any public node you've bootstrapped is already acting as a relay, and its address is woven into new invites automatically, so a joiner behind NAT has a route in without you plumbing anything. Ship the token over any channel; it's signed and valid until its TTL expires.
```sh
# Machine A:
pln serve 8080 api
# Machine B:
pln connect api
curl localhost:8080   # served from A, over the mesh
```

TCP and UDP. Connections punch directly if both peers can reach each other, and relay over the shortest mesh path otherwise. No ingress controller, no DNS, no port forwarding.
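The workload behind `pln serve` needs no mesh awareness. A minimal sketch of what machine A's `api` could be, assuming it speaks plain HTTP (the handler and greeting are hypothetical):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// An ordinary listener on :8080; nothing Pollen-specific. Running
	// `pln serve 8080 api` on this machine exposes it to the mesh.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from machine A")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```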
```sh
pln seed ./hello.wasm
pln call hello greet '{"name":"world"}'
```

`pln seed` publishes a WASM binary into the cluster. Nodes decide locally whether to claim a replica, scoring themselves on available capacity, cached artifacts, and proximity to traffic. There is no central scheduler. When a node goes down, survivors pick up the slack.
Publishing workloads, static sites, named blobs, and services requires
the publisher capability (or admin, which is a strict superset). Each
published resource carries a signed cert anchored to the publisher's
delegation cert.
Example modules live in `examples/`. Run `pln --help` for the full CLI reference.
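As a rough sketch of what the `hello.wasm` above might look like in Go with the Extism PDK (the export name matches the quickstart; everything else is an assumption, so treat `examples/` as authoritative):

```go
// hello.go: a minimal Extism seed sketch. Build with TinyGo, e.g.
// `tinygo build -o hello.wasm -target wasi hello.go`.
package main

import (
	"encoding/json"

	"github.com/extism/go-pdk"
)

//export greet
func greet() int32 {
	// Input is the JSON payload from `pln call hello greet '{"name":"world"}'`.
	var in struct {
		Name string `json:"name"`
	}
	if err := json.Unmarshal(pdk.Input(), &in); err != nil {
		pdk.SetError(err)
		return 1
	}
	pdk.OutputString("hello, " + in.Name)
	return 0
}

// main is required for the WASI build target but never called by Extism.
func main() {}
```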
Seeds that only delegate to another seed can return a tail-call marker instead of making a synchronous `pollen_request` host call:

```json
{"kind":"tail_call","uri":"pln://seed/upper/handle","input":"aGVsbG8="}
```

`input` is standard base64-encoded bytes, and the URI must target a seed. Pollen releases the caller's WASM instance as soon as the marker is observed, then routes the call to the target seed and returns its output to the original caller.
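Emitting the marker from a seed is just writing that JSON as the function's output. A hedged Go sketch, reusing the Extism PDK as above (the `upper` seed and `handle` export are hypothetical):

```go
// delegate.go: a seed that only forwards its input to `upper`.
package main

import (
	"encoding/base64"
	"encoding/json"

	"github.com/extism/go-pdk"
)

//export handle
func handle() int32 {
	// Return the tail-call marker rather than invoking `upper` synchronously;
	// Pollen frees this instance, routes the call, and replies to the caller.
	marker := map[string]string{
		"kind":  "tail_call",
		"uri":   "pln://seed/upper/handle",
		"input": base64.StdEncoding.EncodeToString(pdk.Input()),
	}
	out, err := json.Marshal(marker)
	if err != nil {
		pdk.SetError(err)
		return 1
	}
	pdk.Output(out)
	return 0
}

func main() {}
```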
Pollen has three tiers. Pick the smallest that does the job.
- Leaf (default): can call workloads and connect to services. Cannot publish, cannot delegate.
- Publisher (`--publisher`): can publish workloads, services, blobs, and static sites. Cannot delegate further.
- Admin (`--admin`): everything publisher can do, plus admit and grant new peers. Only the root admin can mint other admins.
```sh
# Grant publisher capability to an existing peer:
pln grant <peer-id> --publisher

# Delegate admin authority; useful for keeping the mesh operable
# (admissions, cert re-issues) with the root node offline:
pln grant <peer-id> --admin

# Bake arbitrary key/value properties into a peer's cert. Seeds see
# the caller's peer key and properties on every invocation, so auth,
# routing, and policy decisions can live inside the workload:
pln grant <peer-id> --prop role=lead --prop team=backend

# Or bake them in at join time, on either path:
pln invite --publisher --prop role=engineer --prop team=backend
pln bootstrap ssh root@host --admin --prop region=eu --prop tier=edge

# Pipe a JSON payload from a file:
cat props.json | pln grant <peer-id> --prop -

# Set the root node's own properties at init time (or replace
# them later with `pln props`):
pln init --prop role=primary --prop region=eu
pln props role=primary region=eu   # replace
pln props --clear                  # wipe
```

```sh
# Require a property on the caller's cert; repeatable, all
# clauses must match:
pln serve 8080 internal --allow-prop team=backend
pln seed ./hello.wasm --allow-prop role=lead
```

Policy clauses ride on the spec's signed cert. The runtime gate evaluates them against the caller's delegation-cert properties on every invoke, fetch, or connect; failed matches close the stream. Disjunction lives at issuance: if you want "editors or admins", mint both with `pln grant --prop tier=privileged` and require `tier=privileged` on the spec. Without a flag the resource is open to any authenticated peer.
```sh
# On each node that should serve HTTP. Port is optional;
# defaults to :8080. `restart` to apply.
pln set static-http            # or `pln set static-http 9000`

# From any node:
pln seed ./public my-site

# Fetch via any serving node:
curl -H "Host: my-site" http://<node-addr>:8080/
```

`pln seed` on a directory hashes every file into the local content-addressed store and publishes the site under `<name>`. Other nodes replicate the files and serve the site themselves. Each node's HTTP listener routes requests by Host header to the matching site.
```sh
# From any node:
pln seed ./big-file.bin            # prints sha-256 digest
pln seed ./big-file.bin payload    # …or publish under a name

# From any other node:
pln fetch <digest|name> ./out.bin  # streams plaintext from the publisher to ./out.bin
```

Blobs are the primitive behind static sites: content-addressed, gossip-advertised, streamed peer-to-peer over QUIC. Receivers verify the digest on arrival. Bytes are encrypted at rest, so `pln fetch` is the export path; it never writes the encrypted form to your local store.
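Because `pln seed` prints the blob's sha-256 and `pln fetch` writes plaintext, an export is easy to double-check by hand. A trivial sketch (the `out.bin` path comes from the example above):

```go
// check.go: recompute the sha-256 of an exported blob to compare against
// the digest `pln seed` printed. Pollen already verifies on arrival; this
// is only a manual double-check of the exported file.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	f, err := os.Open("out.bin")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%x\n", h.Sum(nil)) // should match the published digest
}
```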
- macOS: `sendmsg: no route to host` on LAN dials. Most likely macOS Local Network Privacy. Grant `pln` access in System Settings → Privacy & Security → Local Network. The prompt appears the first time `pln` tries to reach a LAN peer; if you miss it, or the binary's signature changes after an upgrade, LAN dials silently fail while WAN traffic keeps working. Re-granting access fixes it.
Licensed under the Apache License, Version 2.0.