Mesh

Pipebase ships with optional message-bus services that flows can publish to and subscribe from. Use them to decouple a flow on one runtime from a consumer on another.

  • Kafka (pipebase-kafka) — Apache Kafka 3.8 in single-node KRaft mode. Reachable as pipebase-kafka:9092 from inside the coolify network. Auto-creates topics on first publish.
  • Pulsar (pipebase-pulsar) — Apache Pulsar 3.3 in single-node standalone mode. Reachable as pipebase-pulsar:6650 (binary) and :8080 (admin REST). Powers the cross-runtime mesh fabric.
  • Pulsar second broker (pipebase-pulsar-east) — opt-in geo-replication topology. Off by default.
docker compose --profile mesh up -d # both Kafka + Pulsar
docker compose --profile kafka up -d # Kafka only
docker compose --profile pulsar up -d # Pulsar only
docker compose --profile multi-broker up -d # adds pipebase-pulsar-east

The default docker compose up -d does NOT start them: each broker uses roughly 512 MB–1 GB of RAM, so a minimal install still fits in 4 GB.
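Profiles work by tagging services in the compose file: a service with a `profiles:` key is skipped by a plain `docker compose up` and only started when one of its profiles is named on the command line. A minimal sketch of how that gating looks (the service details and image tag here are illustrative assumptions, not the actual compose file):

```yaml
services:
  pipebase-kafka:
    image: apache/kafka:3.8.0    # single-node KRaft mode (tag is an assumption)
    profiles: ["mesh", "kafka"]  # started only with --profile mesh or --profile kafka
```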

| Use case | Use |
| --- | --- |
| Decouple producer + consumer flows | Either |
| High throughput (>10k msg/s) | Kafka |
| Geo-replication | Pulsar (multi-broker) |
| Topic-level RBAC | Pulsar |
| Schema registry | Kafka (built-in) |
| Tiered storage to S3 | Pulsar |

Kafka — producer:

- to: kafka:orders.created

Kafka — consumer:

- from:
    uri: kafka:orders.created
    steps:
      - log:
          message: "Got order ${body}"
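The consumer above logs the raw message body via `${body}`. A producer on another runtime typically publishes JSON; a minimal sketch of building such a payload and the log line it would produce (the order shape is an assumption, not a Pipebase schema):

```python
import json

# Illustrative only: the event shape is an assumption, not a Pipebase schema.
order = {"id": "A-1001", "status": "created"}
body = json.dumps(order)    # the string an external producer might publish to orders.created
print(f"Got order {body}")  # roughly what the log step above would emit
```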

Pulsar:

- to: pulsar:persistent://public/default/events
- from:
    uri: pulsar:persistent://public/default/events
    parameters:
      subscriptionName: pipebase-consumer
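Pulsar topic URIs follow a fixed `persistence://tenant/namespace/topic` structure, which is why the example uses `persistent://public/default/events` (Pulsar's default tenant and namespace). A small illustrative helper (not a Pipebase API) that splits such a URI into its parts:

```python
# Illustrative helper (not a Pipebase API): split a Pulsar topic URI into its parts.
def parse_pulsar_topic(uri: str) -> dict:
    persistence, rest = uri.split("://", 1)        # "persistent" or "non-persistent"
    tenant, namespace, topic = rest.split("/", 2)  # tenant/namespace/topic
    return {"persistence": persistence, "tenant": tenant,
            "namespace": namespace, "topic": topic}

print(parse_pulsar_topic("persistent://public/default/events"))
# → {'persistence': 'persistent', 'tenant': 'public', 'namespace': 'default', 'topic': 'events'}
```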

v1 has no auth on the brokers. They sit on the internal coolify Docker network and are not exposed publicly. If you put Pipebase on a multi-tenant host, treat this as a hard "needs more work" boundary: neither Pulsar nor Kafka ships with credentials by default in our compose setup.

  • Runtimes — how runtimes consume from and publish to mesh topics