
Event Bus — Overview

The Event Bus is the asynchronous communication backbone of the platform. Modules never call each other directly — they publish events and subscribe to events through the Event Bus. This decouples modules, enables replay, and guarantees that every cross-module side effect is auditable.

The SDK exposes the Event Bus via kernel.events(). The underlying broker and transport are abstracted — module code never imports Kafka or RabbitMQ packages directly.
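To make the publish/subscribe surface concrete, here is a minimal in-memory sketch of the shape of the API. The class and method signatures are assumptions for illustration — the real kernel.events() routes publishes through the broker transports and enforces RBAC and tenant scoping, none of which is modelled here:

```typescript
// Minimal in-memory sketch of the kernel.events() publish/subscribe surface.
// The EventBus class and its signatures are illustrative assumptions.
type Handler = (payload: unknown) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  publish(event: string, payload: unknown): void {
    for (const h of this.handlers.get(event) ?? []) h(payload);
  }
}

const bus = new EventBus();
const seen: unknown[] = [];
bus.subscribe("crm.contact.created", (p) => seen.push(p));
bus.publish("crm.contact.created", { id: "c-1" });
```

In the real SDK the subscriber would typically live in a different module from the publisher; the in-memory wiring above only shows the call shape.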


Three Transport Layers

The platform uses three distinct transports, each optimised for a different communication pattern:

| Transport | Technology | Use case | Latency |
|---|---|---|---|
| Pub/Sub | Apache Kafka (segmentio/kafka-go) | Domain events: clicks, conversions, payments, lifecycle | < 50 ms |
| Transactional tasks | RabbitMQ (rabbitmq/amqp091-go) | Notifications, payouts — exactly-once semantics | < 100 ms |
| Browser events | DOM CustomEvent + event-bridge | MFE → MFE within the UI Shell | < 5 ms |

Kafka is the primary bus for domain events. RabbitMQ handles tasks that require stronger delivery guarantees and acknowledgement semantics. Browser CustomEvents are entirely client-side and never reach the backend.


Event-level RBAC via Manifest

Module events are opt-in at both ends: a module must explicitly declare which events it publishes and which it subscribes to in module.manifest.json. Absence of a declaration is a zero-permission default.

```json
{
  "name": "@acme/crm",
  "version": "1.2.0",
  "events": {
    "publishes": [
      "crm.contact.created",
      "crm.deal.won"
    ],
    "subscribes": [
      "auth.user.created",
      "billing.subscription.changed"
    ]
  }
}
```
| Attempt | Result |
|---|---|
| Publish an event not in publishes[] | 403 Forbidden |
| Subscribe to an event not in subscribes[] | 403 Forbidden |
| Module with no events block | Zero publishes, zero subscribes |
| Publish a kernel event (auth.*, money.*, billing.*) | 403 Forbidden — kernel events are whitelist-only |

Event RBAC is enforced by the API Gateway on every publish() call and by the SDK on every subscribe() call. It cannot be bypassed by module code.
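The decision logic behind those 403s can be sketched as a pure function. This is a simplified assumption of the gateway-side check, not the actual implementation — in particular, it treats all kernel prefixes as off-limits to modules, whereas the real whitelist presumably permits the kernel services themselves:

```typescript
// Simplified sketch of manifest-based event RBAC (assumed logic).
interface EventsManifest {
  publishes: string[];
  subscribes: string[];
}

const KERNEL_PREFIXES = ["auth.", "money.", "billing."];

function canPublish(manifest: EventsManifest | undefined, event: string): boolean {
  // Kernel events are whitelist-only: module publishes are always rejected.
  if (KERNEL_PREFIXES.some((p) => event.startsWith(p))) return false;
  // A missing events block means a zero-permission default.
  return manifest?.publishes.includes(event) ?? false;
}

const crmManifest: EventsManifest = {
  publishes: ["crm.contact.created", "crm.deal.won"],
  subscribes: ["auth.user.created", "billing.subscription.changed"],
};
```

A declared event passes; an undeclared or kernel event maps to the 403 responses in the table above.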


Tenant Isolation

All events are tenant-scoped. The tenantId is injected by the API Gateway from the request JWT when a module publishes an event — the module cannot set tenantId itself.

On the subscriber side, kernel.events().subscribe() automatically applies a filter so that a subscriber only receives events that belong to its own tenant. A module running in tenant A never sees events published by tenant B, even though both share the same Kafka topic.
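The subscriber-side filter amounts to comparing the envelope's gateway-injected tenantId against the consumer's own tenant. The envelope shape below is an assumption for illustration; only the filtering behaviour is described by this document:

```typescript
// Sketch of the automatic tenant filter applied by subscribe().
// The EventEnvelope fields are assumed; tenantId is set by the API Gateway.
interface EventEnvelope {
  name: string;
  tenantId: string;
  payload: unknown;
}

function tenantFilter(ownTenantId: string) {
  return (e: EventEnvelope): boolean => e.tenantId === ownTenantId;
}

// Both tenants share the same Kafka topic, but tenant A's consumer
// only ever sees tenant A's events.
const topicStream: EventEnvelope[] = [
  { name: "crm.contact.created", tenantId: "tenant-a", payload: {} },
  { name: "crm.contact.created", tenantId: "tenant-b", payload: {} },
];
const visible = topicStream.filter(tenantFilter("tenant-a"));
```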


8 Kafka Topics

The platform uses one topic per bounded context — a common industry pattern that balances per-entity ordering guarantees with operational manageability (tens of topics rather than millions):

| Topic | Domain | Example events |
|---|---|---|
| platform.auth.events | IAM | auth.user.created, auth.user.logged_in, auth.role.changed |
| platform.module.events | Module Registry | module.registry.registered, module.registry.activated |
| platform.money.events | Money Service | money.wallet.credited, money.wallet.debited |
| platform.files.events | File Storage | files.file.uploaded |
| platform.notify.events | Notify Service | notify.notification.sent |
| platform.audit.events | Audit Service | audit.record.created |
| platform.billing.events | Billing | billing.plan.changed, billing.subscription.changed |
| platform.data.events | Data Layer (CDC) | Change-data-capture events from Debezium |

Module-specific events (e.g. crm.contact.created) are routed through the same platform.data.events topic or a dedicated module topic, depending on the module manifest configuration.


Communication Patterns

| Pattern | Technology | Direction | Notes |
|---|---|---|---|
| Pub/Sub | Kafka | Backend → Backend | Durable, replayable, ordered per entity |
| Browser CustomEvents | DOM API | Frontend → Frontend | Sub-5 ms MFE communication within the shell |
| Zustand shared state | Zustand singleton | Frontend | Framework-agnostic state shared across MFEs |
| WebSocket bridge | WebSocket + Event Bus | Backend → Frontend | Real-time updates: live statistics, balances |
| Transactional tasks | RabbitMQ | Backend → Backend | Notification delivery, payment processing |

Dead Letter Queue

If a subscriber handler fails 3 consecutive times on the same event (poison message), the SDK moves the event to the dead-letter topic platform.{domain}.dlq. The consumer continues processing the next events. Stuck events do not block the pipeline.
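The poison-message path can be sketched as follows. The consume loop and dlqTopic helper are assumptions; the retry limit of 3 and the platform.{domain}.dlq naming come from the description above:

```typescript
// Sketch of poison-message handling: 3 consecutive failures on the same
// event route it to the dead-letter topic, and the consumer moves on.
const MAX_ATTEMPTS = 3;

function dlqTopic(eventName: string): string {
  const domain = eventName.split(".")[0]; // e.g. "crm" from "crm.deal.won"
  return `platform.${domain}.dlq`;
}

function consume(
  events: string[],
  handler: (e: string) => void,
): { processed: string[]; deadLettered: Array<[string, string]> } {
  const processed: string[] = [];
  const deadLettered: Array<[string, string]> = [];
  for (const e of events) {
    let ok = false;
    for (let attempt = 1; attempt <= MAX_ATTEMPTS && !ok; attempt++) {
      try {
        handler(e);
        ok = true;
      } catch {
        // Retry the same event until MAX_ATTEMPTS is exhausted.
      }
    }
    if (ok) processed.push(e);
    else deadLettered.push([e, dlqTopic(e)]); // pipeline is not blocked
  }
  return { processed, deadLettered };
}

const result = consume(["crm.a", "crm.bad", "crm.b"], (e) => {
  if (e === "crm.bad") throw new Error("poison message");
});
```

Note that healthy events after the poison message are still processed — the DLQ move is what keeps the partition flowing.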

Dead-letter events are visible in Admin → Events → DLQ. A Platform Owner can retry them individually (POST /api/v1/events/dlq/:id/retry) or in bulk with throttling (POST /api/v1/events/dlq/replay-all at 50 events/sec by default, configurable via DLQ_REPLAY_RATE_LIMIT).
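The bulk-replay throttle can be illustrated by spreading retries evenly at the configured rate. This scheduling function is an assumption — only the 50 events/sec default and the DLQ_REPLAY_RATE_LIMIT knob are from the text:

```typescript
// Sketch of throttled DLQ replay: plan each retry's delay so that at most
// `ratePerSec` events are replayed per second (default 50, per the
// DLQ_REPLAY_RATE_LIMIT setting). Returns [eventId, delayMs] pairs.
function replaySchedule(
  eventIds: string[],
  ratePerSec = 50,
): Array<[string, number]> {
  const intervalMs = 1000 / ratePerSec; // 20 ms between events at the default
  return eventIds.map((id, i) => [id, Math.floor(i * intervalMs)]);
}

const plan = replaySchedule(["e-1", "e-2", "e-3"]);
// e-1 replays immediately, e-2 after 20 ms, e-3 after 40 ms
```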

Full details:

  • Publishing Events — publish(), idempotency, RBAC, kernel event whitelist
  • Subscribing to Events — subscribe(), consumer groups, replay, offset tracking
  • Event Model — event schema fields, Protobuf, versioning (Batch 12)
  • Event Catalog — all kernel events with schemas (Batch 12)
  • Broker Resilience — Kafka/RabbitMQ failure modes (Batch 13)
  • Kafka Topics — naming, partitions, retention (Batch 13)