Event Bus — Overview
The Event Bus is the asynchronous communication backbone of the platform. Modules never call each other directly — they publish events and subscribe to events through the Event Bus. This decouples modules, enables replay, and guarantees that every cross-module side effect is auditable.
The SDK exposes the Event Bus via kernel.events(). The underlying broker and transport are abstracted — module code never imports Kafka or RabbitMQ packages directly.
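A minimal sketch of what module code sees, assuming publish()/subscribe() method names on the kernel.events() surface — the stand-in bus below is in-memory and purely illustrative; the real SDK routes these calls through the brokers:

```typescript
// In-memory stand-in for the kernel.events() surface. Method names
// publish()/subscribe() are assumptions for illustration; the real SDK
// delegates to Kafka/RabbitMQ behind this interface.
type Handler = (payload: unknown) => void;

class EventBusSketch {
  private handlers = new Map<string, Handler[]>();

  subscribe(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  publish(event: string, payload: unknown): void {
    for (const h of this.handlers.get(event) ?? []) h(payload);
  }
}

const bus = new EventBusSketch();
const seen: unknown[] = [];
bus.subscribe("crm.contact.created", (p) => seen.push(p));
bus.publish("crm.contact.created", { id: "c-1" });
```

Because modules only ever talk to this abstract surface, swapping or upgrading the broker never touches module code.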
Three Transport Layers
The platform uses three distinct transports, each optimised for a different communication pattern:
| Transport | Technology | Use case | Latency |
|---|---|---|---|
| Pub/Sub | Apache Kafka (segmentio/kafka-go) | Domain events: clicks, conversions, payments, lifecycle | < 50 ms |
| Transactional tasks | RabbitMQ (rabbitmq/amqp091-go) | Notifications, payouts — exactly-once semantics | < 100 ms |
| Browser events | DOM CustomEvent + event-bridge | MFE → MFE within the UI Shell | < 5 ms |
Kafka is the primary bus for domain events. RabbitMQ handles tasks that require stronger delivery guarantees and acknowledgement semantics. Browser CustomEvents are entirely client-side and never reach the backend.
Event-level RBAC via Manifest
Module events are opt-in at both ends: a module must explicitly declare which events it publishes and which it subscribes to in module.manifest.json. Absence of a declaration is a zero-permission default.
```json
{
  "name": "@acme/crm",
  "version": "1.2.0",
  "events": {
    "publishes": [
      "crm.contact.created",
      "crm.deal.won"
    ],
    "subscribes": [
      "auth.user.created",
      "billing.subscription.changed"
    ]
  }
}
```
| Attempt | Result |
|---|---|
| Publish an event not in publishes[] | 403 Forbidden |
| Subscribe to an event not in subscribes[] | 403 Forbidden |
| Module with no events block | Zero publishes, zero subscribes |
| Publish a kernel event (auth.*, money.*, billing.*) | 403 Forbidden — kernel events are whitelist-only |
Event RBAC is enforced by the API Gateway on every publish() call and by the SDK on every subscribe() call. It cannot be bypassed by module code.
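The checks described above can be sketched as a pair of guard functions over the manifest's events block. The field names mirror module.manifest.json; the guard functions themselves are an illustrative assumption, not the platform's actual implementation:

```typescript
// Manifest-driven event RBAC sketch. Field names follow
// module.manifest.json; the guard logic is illustrative.
interface EventsManifest {
  publishes?: string[];
  subscribes?: string[];
}

const KERNEL_PREFIXES = ["auth.", "money.", "billing."];

function canPublish(manifest: EventsManifest, event: string): boolean {
  // Kernel events are whitelist-only: modules may never publish them.
  if (KERNEL_PREFIXES.some((p) => event.startsWith(p))) return false;
  // Missing events block → zero-permission default.
  return (manifest.publishes ?? []).includes(event);
}

function canSubscribe(manifest: EventsManifest, event: string): boolean {
  return (manifest.subscribes ?? []).includes(event);
}

const crm: EventsManifest = {
  publishes: ["crm.contact.created", "crm.deal.won"],
  subscribes: ["auth.user.created", "billing.subscription.changed"],
};
```

Note the asymmetry: a module may subscribe to kernel events such as auth.user.created if declared, but can never publish into a kernel namespace.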
Tenant Isolation
All events are tenant-scoped. The tenantId is injected by the API Gateway from the request JWT when a module publishes an event — the module cannot set tenantId itself.

On the subscriber side, kernel.events().subscribe() automatically applies a filter so that a subscriber only receives events that belong to its own tenant. A module running in tenant A never sees events published by tenant B, even though both share the same Kafka topic.
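Both halves of this isolation can be sketched in a few lines — gateway-side stamping of tenantId from the JWT, and subscriber-side filtering of a shared topic. Types and field names here are illustrative assumptions:

```typescript
// Tenant-isolation sketch: the gateway stamps tenantId at publish time,
// the subscriber side drops foreign-tenant events. Field names are
// illustrative, not the platform's actual event schema.
interface PlatformEvent {
  type: string;
  tenantId: string;
  payload: unknown;
}

// Gateway side: tenantId always comes from the JWT, never the module.
function stampTenant(
  event: Omit<PlatformEvent, "tenantId">,
  jwtTenantId: string,
): PlatformEvent {
  return { ...event, tenantId: jwtTenantId };
}

// Subscriber side: a shared topic is filtered down to the module's tenant.
function tenantFilter(events: PlatformEvent[], myTenant: string): PlatformEvent[] {
  return events.filter((e) => e.tenantId === myTenant);
}

const sharedTopic = [
  stampTenant({ type: "crm.contact.created", payload: {} }, "tenant-a"),
  stampTenant({ type: "crm.contact.created", payload: {} }, "tenant-b"),
];
const visible = tenantFilter(sharedTopic, "tenant-a");
```

Stamping on the publish path means a malicious module cannot forge a tenantId, and filtering on the subscribe path means it cannot read one either.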
8 Kafka Topics
The platform uses one topic per bounded context — a well-established industry convention that balances ordering guarantees with manageability (tens of topics rather than millions):
| Topic | Domain | Example events |
|---|---|---|
| platform.auth.events | IAM | auth.user.created, auth.user.logged_in, auth.role.changed |
| platform.module.events | Module Registry | module.registry.registered, module.registry.activated |
| platform.money.events | Money Service | money.wallet.credited, money.wallet.debited |
| platform.files.events | File Storage | files.file.uploaded |
| platform.notify.events | Notify Service | notify.notification.sent |
| platform.audit.events | Audit Service | audit.record.created |
| platform.billing.events | Billing | billing.plan.changed, billing.subscription.changed |
| platform.data.events | Data Layer (CDC) | Change-data-capture events from Debezium |
Module-specific events (e.g. crm.contact.created) are routed through the same platform.data.events topic or a dedicated module topic, depending on the module manifest configuration.
Communication Patterns
| Pattern | Technology | Direction | Notes |
|---|---|---|---|
| Pub/Sub | Kafka | Backend → Backend | Durable, replayable, ordered per entity |
| Browser CustomEvents | DOM API | Frontend → Frontend | Zero-latency MFE communication within shell |
| Zustand shared state | Zustand singleton | Frontend | Framework-agnostic state shared across MFEs |
| WebSocket bridge | WebSocket + Event Bus | Backend → Frontend | Real-time updates: live statistics, balances |
| Transactional tasks | RabbitMQ | Backend → Backend | Notification delivery, payment processing |
Dead Letter Queue
If a subscriber handler fails 3 consecutive times on the same event (a poison message), the SDK moves the event to the dead-letter topic platform.{domain}.dlq. The consumer continues processing subsequent events, so a stuck event never blocks the pipeline.
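The retry-then-park behaviour can be sketched as follows — the attempt bookkeeping and function shape are illustrative assumptions; the real SDK tracks this against broker offsets:

```typescript
// Poison-message sketch: after MAX_ATTEMPTS consecutive failures the
// event is parked on platform.{domain}.dlq and the consumer moves on.
// The bookkeeping below is illustrative, not the SDK's implementation.
const MAX_ATTEMPTS = 3;

function consume(
  events: { id: string; domain: string }[],
  handler: (e: { id: string }) => void,
): { processed: string[]; dlq: Map<string, string[]> } {
  const processed: string[] = [];
  const dlq = new Map<string, string[]>();
  for (const event of events) {
    let attempts = 0;
    while (attempts < MAX_ATTEMPTS) {
      try {
        handler(event);
        processed.push(event.id);
        break;
      } catch {
        attempts++;
      }
    }
    if (attempts === MAX_ATTEMPTS) {
      // Dead-letter topic name follows the platform.{domain}.dlq pattern.
      const topic = `platform.${event.domain}.dlq`;
      dlq.set(topic, [...(dlq.get(topic) ?? []), event.id]);
    }
  }
  return { processed, dlq };
}

const result = consume(
  [{ id: "e-1", domain: "billing" }, { id: "e-2", domain: "billing" }],
  (e) => {
    if (e.id === "e-1") throw new Error("poison"); // e-1 always fails
  },
);
```

Here e-1 exhausts its attempts and lands on platform.billing.dlq, while e-2 is processed normally — the pipeline keeps moving.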
Dead-letter events are visible in Admin → Events → DLQ. A Platform Owner can retry them individually (POST /api/v1/events/dlq/:id/retry) or in bulk with throttling (POST /api/v1/events/dlq/replay-all, 50 events/sec by default, configurable via DLQ_REPLAY_RATE_LIMIT).
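One simple way to honour the replay rate limit is a fixed inter-event delay derived from DLQ_REPLAY_RATE_LIMIT — the pacing logic below is an illustrative assumption; the endpoint paths and variable name come from the text:

```typescript
// Throttled bulk-replay sketch: DLQ_REPLAY_RATE_LIMIT (default 50/sec)
// becomes a fixed delay between retried events. Illustrative only.
const DLQ_REPLAY_RATE_LIMIT = 50; // events per second (default)
const delayMs = 1000 / DLQ_REPLAY_RATE_LIMIT; // 20 ms between events

async function replayAll(
  ids: string[],
  retry: (id: string) => Promise<void>, // e.g. POST .../dlq/:id/retry
): Promise<void> {
  for (const id of ids) {
    await retry(id);
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```

Throttling matters here because a bulk replay of a large DLQ would otherwise hammer the same downstream handlers that failed in the first place.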
Full details:
- Publishing Events — publish(), idempotency, RBAC, kernel event whitelist
- Subscribing to Events — subscribe(), consumer groups, replay, offset tracking
- Event Model — event schema fields, Protobuf, versioning (Batch 12)
- Event Catalog — all kernel events with schemas (Batch 12)
- Broker Resilience — Kafka/RabbitMQ failure modes (Batch 13)
- Kafka Topics — naming, partitions, retention (Batch 13)