# @platform/sdk-events
@platform/sdk-events gives modules access to the Platform Event Bus.
Domain events travel over Apache Kafka (pub/sub, replayable logs).
Transactional tasks (notifications, payment triggers) travel over
RabbitMQ. Browser-to-browser MFE events use the native
CustomEvent API. The SDK abstracts all three transports.
## Installation

```bash
pnpm add @platform/sdk-events
```
## RBAC — Manifest Declaration Required
A module may only publish and subscribe to events declared in its
module.manifest.json. Attempting to access an undeclared event
returns 403 Forbidden:
```json
{
  "name": "@scope/crm-module",
  "events": {
    "publishes": [
      "crm.contact.created",
      "crm.deal.closed"
    ],
    "subscribes": [
      "auth.user.created",
      "billing.subscription.changed"
    ]
  }
}
```
Kernel events (`auth.*`, `money.*`, `billing.*`) can only be
published by kernel services — modules may subscribe to them but
not publish them. If the `events` block is missing from the manifest,
the module gets zero publish and zero subscribe permissions
(restrictive default).
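The permission rules above can be sketched as a small check. This is illustrative only — the real enforcement happens platform-side, and the `EventManifest` type, `canPublish`, and `canSubscribe` names are assumptions, not SDK API:

```typescript
// Sketch of the manifest-based permission check described above.
// Illustrative only: enforcement actually lives in the Gateway/kernel.
interface EventManifest {
  publishes: string[];
  subscribes: string[];
}

// Kernel-reserved domains: modules may subscribe but never publish.
const KERNEL_PREFIXES = ['auth.', 'money.', 'billing.'];

function canPublish(manifest: EventManifest | undefined, eventType: string): boolean {
  if (KERNEL_PREFIXES.some((p) => eventType.startsWith(p))) return false;
  // Missing events block => restrictive default: no permissions at all.
  if (!manifest) return false;
  return manifest.publishes.includes(eventType);
}

function canSubscribe(manifest: EventManifest | undefined, eventType: string): boolean {
  if (!manifest) return false;
  return manifest.subscribes.includes(eventType);
}
```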
## publish()
Publish a domain event to the tenant-scoped Kafka topic:
```typescript
import { kernel } from '@platform/sdk-core';

await kernel.events().publish({
  type: 'crm.contact.created',
  payload: {
    contactId: '01j9pcont000000000000001',
    source: 'web-form',
  },
});

// Returns: { eventId: 'uuid-v7', topic: 'platform.crm.events' }
```
The Gateway automatically injects tenantId into every event from
the caller's JWT — modules cannot spoof this field.
## Event Model
| Field | Type | Description |
|---|---|---|
| `id` | UUID v7 | Unique event identifier (time-sortable, idempotency key) |
| `type` | string | Event name (`crm.contact.created`) |
| `source` | string | Module ID of the publisher |
| `tenantId` | string | Injected by Gateway from JWT |
| `payload` | object | Event data (max 1 MB) |
| `schemaVersion` | string | Semver of the payload schema |
| `timestamp` | string | ISO 8601 UTC |
| `traceId` | string | OpenTelemetry trace ID for cross-service correlation |
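The envelope in the table can be written down as a TypeScript shape. The interface name and the `isPlatformEvent` guard are illustrative, not part of the SDK; field names follow the table above:

```typescript
// Hypothetical TypeScript shape of the event envelope (names from the
// field table; the interface itself is not exported by the SDK).
interface PlatformEvent<P = Record<string, unknown>> {
  id: string;            // UUID v7: time-sortable, usable as idempotency key
  type: string;          // e.g. 'crm.contact.created'
  source: string;        // module ID of the publisher
  tenantId: string;      // injected by the Gateway from the JWT
  payload: P;            // event data, max 1 MB
  schemaVersion: string; // semver of the payload schema
  timestamp: string;     // ISO 8601 UTC
  traceId: string;       // OpenTelemetry trace ID
}

// Minimal runtime guard for handlers that receive untyped input.
function isPlatformEvent(value: unknown): value is PlatformEvent {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  const stringFields = ['id', 'type', 'source', 'tenantId', 'schemaVersion', 'timestamp', 'traceId'];
  return stringFields.every((k) => typeof v[k] === 'string')
    && typeof v.payload === 'object' && v.payload !== null;
}
```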
## Kafka Topic Routing

Events are routed by domain, not by event type. The partition key is
`entityId` — all events for the same entity land on the same partition,
guaranteeing ordered delivery per entity:
| Topic | Domain |
|---|---|
| `platform.auth.events` | IAM events |
| `platform.money.events` | Money / wallet events |
| `platform.files.events` | File storage events |
| `platform.notify.events` | Notification events |
| `platform.audit.events` | Audit records (30-day retention) |
| `platform.billing.events` | Billing / subscription events |
| `platform.{module}.events` | Module-specific events |
Default Kafka retention: 7 days (`KAFKA_RETENTION_HOURS=168`).
The `platform.audit.events` topic uses 30 days to support compliance
workloads.
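The domain-based routing above amounts to taking the first segment of the event type. A minimal sketch, assuming the `topicFor` helper name (the SDK performs this mapping internally and does not expose such a function):

```typescript
// Sketch of domain-based topic routing: 'crm.contact.created' belongs
// to domain 'crm', so it lands on 'platform.crm.events'.
function topicFor(eventType: string): string {
  const domain = eventType.split('.')[0];
  return `platform.${domain}.events`;
}
```

Events for one entity still share a partition via the `entityId` partition key, regardless of which topic the domain maps to.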
## publish() When Kafka Is Unavailable
When Kafka is down:

- `publish()` throws `PlatformError { type: 'events.broker.unavailable' }`
- The module decides: retry manually, or ignore the error for fire-and-forget events

When RabbitMQ is down (transactional tasks):

- Same behaviour — the SDK throws and the module handles the error
For critical events that must not be lost, wrap publish() in a
retry loop with exponential backoff, or use the
Integration Hub for outbound reliability.
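One way to implement the suggested retry loop is a generic backoff wrapper. `withRetry` is a local helper sketched here, not part of the SDK:

```typescript
// Retry an async operation with exponential backoff.
// Delays between attempts: baseDelayMs, 2x, 4x, ... (200 ms, 400 ms, 800 ms, ...).
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 200,
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // retries exhausted
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw new Error('unreachable');
}

// Usage (assuming the publish() call shown earlier):
// await withRetry(() => kernel.events().publish({ type: 'crm.deal.closed', payload: { /* ... */ } }));
```

For events that must survive even a prolonged outage, the Integration Hub's outbound reliability is still the safer option, since an in-process retry loop dies with the process.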
## subscribe()
Subscribe to a Kafka topic and process events. The SDK manages consumer group lifecycle, offset tracking, and reconnection:
```typescript
import { kernel } from '@platform/sdk-core';

kernel.events().subscribe('auth.user.created', async (event) => {
  const { userId, email, tenantId } = event.payload;
  await createWelcomeContact(userId, email);
  // Return: void (success) or throw (triggers retry / DLQ)
});
```
The SDK uses the consumer group `{moduleId}.{handler-name}`, guaranteeing
that exactly one module instance processes each event.
## Built-In Dead Letter Queue (DLQ)

If the handler throws 3 consecutive times for the same event (a poison
message), the SDK automatically routes it to the dead-letter topic
`platform.{domain}.dlq` and continues processing the next event:
```text
Event arrives → handler throws (attempt 1)
             → handler throws (attempt 2)
             → handler throws (attempt 3 = poison)
             → event routed to platform.crm.dlq (dead-letter topic)
             → handler continues with next event (no blocking)
```
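The retry-then-DLQ flow can be sketched as follows. The SDK does this internally; `processWithDlq` and `sendToDlq` are illustrative stand-ins, not SDK API:

```typescript
// Sketch of the attempt-counting behaviour: retry up to maxAttempts,
// then park the poison message on the DLQ and move on (no blocking).
async function processWithDlq<E>(
  event: E,
  handler: (event: E) => Promise<void>,
  sendToDlq: (event: E, error: unknown) => Promise<void>,
  maxAttempts = 3,
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await handler(event);
      return; // success: continue with the next event
    } catch (err) {
      if (attempt === maxAttempts) {
        // Poison message: route to platform.{domain}.dlq and keep going.
        await sendToDlq(event, err);
        return;
      }
    }
  }
}
```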
DLQ events are visible in the Admin UI. Manual retry:
```http
POST https://api.septemcore.com/v1/events/dlq/{id}/retry
Authorization: Bearer <access_token>
```
Bulk retry (throttled at 50 events/sec to prevent re-flooding):
```http
POST https://api.septemcore.com/v1/events/dlq/replay-all
```

Response: `{ "replayed": 42, "skipped_already_resolved": 8 }`
**Idempotency for financial events:** consumers for `money.*` and
`billing.*` events must use permanent PostgreSQL deduplication (a
`processed_event_ids` table), not Valkey TTL. DLQ events can be older
than 24 hours, making Valkey TTL-based dedup unreliable.
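The permanent-dedup requirement boils down to checking the event `id` against a durable store before handling. A minimal sketch, assuming a hypothetical `handleOnce` helper; the in-memory `Set` here only stands in for the PostgreSQL `processed_event_ids` table (a real consumer would use `INSERT ... ON CONFLICT DO NOTHING` inside the handler's transaction):

```typescript
// Stand-in for the durable processed_event_ids table.
const processedEventIds = new Set<string>();

// Run the handler at most once per event id. Returns true if the
// handler ran, false if the event was a duplicate (e.g. a DLQ replay).
async function handleOnce(
  eventId: string,
  handler: () => Promise<void>,
): Promise<boolean> {
  if (processedEventIds.has(eventId)) {
    return false; // already processed: skip silently
  }
  await handler();
  // Recorded only after success so a failed handler can be retried.
  // In SQL this insert belongs in the same transaction as the handler's writes.
  processedEventIds.add(eventId);
  return true;
}
```

Because the dedup record never expires, a DLQ replay arriving days later is still recognized as a duplicate, which a 24-hour Valkey TTL cannot guarantee.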
## replayFrom() — Replay from a Timestamp
Re-read historical events from a specific point in time. Kafka retains events for 7 days by default:
```typescript
await kernel.events().replayFrom({
  topic: 'auth.user.created',
  since: '2026-04-20T00:00:00Z', // ISO 8601
  handler: async (event) => {
    await reprocessLegacyUser(event.payload);
  },
});

// Throws: PlatformError { type: 'events.retention.exceeded' }
// if 'since' is older than KAFKA_RETENTION_HOURS
```
## Consumer Lag Monitoring
| Lag duration | Alert level | Action |
|---|---|---|
| > 1 minute | info | Logged only |
| > 5 minutes | warning | Alert sent to platform ops via Notify |
| > 15 minutes | critical | Notify + PagerDuty |
| Bulk import detected | Suppressed | Lag warnings suppressed for 30 minutes during bulk data spikes |
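The thresholds above can be expressed as a simple classifier. The `lagAlertLevel` name and `AlertLevel` type are illustrative; the real monitoring runs on the platform side, not in module code:

```typescript
// Map consumer lag (in seconds) to the alert levels from the table above.
type AlertLevel = 'none' | 'info' | 'warning' | 'critical' | 'suppressed';

function lagAlertLevel(lagSeconds: number, bulkImportActive: boolean): AlertLevel {
  if (bulkImportActive) return 'suppressed';   // lag warnings muted during bulk spikes
  if (lagSeconds > 15 * 60) return 'critical'; // Notify + PagerDuty
  if (lagSeconds > 5 * 60) return 'warning';   // alert to platform ops
  if (lagSeconds > 60) return 'info';          // logged only
  return 'none';
}
```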
## onEvent() — Browser Custom Events

For MFE-to-MFE communication within the same shell (microsecond
latency, no network), use browser `CustomEvent`:
```typescript
import { kernel } from '@platform/sdk-core';

// Publisher MFE: emit a browser event
kernel.events().emit('cart.item.added', {
  productId: 'prod-abc',
  quantity: 2,
});

// Subscriber MFE: listen for browser events
const unsubscribe = kernel.events().onEvent('cart.item.added', (payload) => {
  updateCartBadge(payload.quantity);
});

// Clean up on component unmount
return () => unsubscribe();
```
**Ordering guarantee:** browser `CustomEvent` does NOT guarantee delivery
order. If order matters (e.g. balance updates, state machines), use
Kafka-backed `subscribe()` — Kafka guarantees per-entity ordering via
the partition key (`entityId`).
## Core Platform Events Catalog
| Event | Published by | Payload |
|---|---|---|
| `auth.user.created` | IAM | `{ userId, email, tenantId, roles }` |
| `auth.user.logged_in` | IAM | `{ userId, tenantId, ip, userAgent }` |
| `auth.role.changed` | IAM | `{ userId, oldRoles, newRoles }` |
| `money.wallet.credited` | Money Service | `{ txId, userId, amountCents, currency }` |
| `money.wallet.debited` | Money Service | `{ txId, userId, amountCents, currency }` |
| `files.file.uploaded` | File Storage | `{ fileId, userId, bucket, key, size }` |
| `notify.notification.sent` | Notify Service | `{ notificationId, channel, userId }` |
| `billing.plan.changed` | Billing | `{ tenantId, oldPlan, newPlan }` |
| `billing.subscription.changed` | Billing | `{ tenantId, oldStatus, newStatus }` |
| `module.registry.activated` | Module Registry | `{ moduleId }` |
Modules may subscribe to any of these. Modules may never publish
`auth.*`, `money.*`, or `billing.*` — these are kernel-reserved.