# Publishing Events
Modules publish events via `kernel.events().publish()`. The SDK handles
serialisation, idempotency key generation, `tenantId` injection, and
Kafka producer retries. Module code never interacts with Kafka, RabbitMQ,
or Protobuf directly.
## Publish a Single Event

```typescript
import { kernel } from '@platform/sdk-core';

await kernel.events().publish('crm.contact.created', {
  contactId: '01j9pa5mz700000000000000',
  name: 'Alice Chen',
  tenantId: '01j9p3kz5f00000000000000', // optional — Gateway always overwrites
});
```
The SDK generates a UUID v7 idempotency key automatically, attaches
tenantId from the current JWT context, and publishes the event to
the appropriate Kafka topic.
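For illustration, here is a minimal UUID v7 generator following RFC 9562. This is a sketch of what the SDK does internally, not its actual source:

```typescript
import { randomBytes } from 'node:crypto';

// Illustrative UUID v7 generator (RFC 9562). The leading 48 bits are a
// big-endian Unix timestamp in milliseconds, which makes keys time-sortable.
function uuidv7(): string {
  const bytes = randomBytes(16);
  const ms = BigInt(Date.now());
  for (let i = 0; i < 6; i++) {
    bytes[i] = Number((ms >> BigInt(8 * (5 - i))) & BigInt(0xff));
  }
  bytes[6] = (bytes[6] & 0x0f) | 0x70; // version nibble = 7
  bytes[8] = (bytes[8] & 0x3f) | 0x80; // RFC 9562 variant bits
  const hex = bytes.toString('hex');
  return [
    hex.slice(0, 8),
    hex.slice(8, 12),
    hex.slice(12, 16),
    hex.slice(16, 20),
    hex.slice(20),
  ].join('-');
}
```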
## Full SDK signature

```typescript
kernel.events().publish(
  type: string,                   // event type, e.g. 'crm.contact.created'
  data: Record<string, unknown>,  // event payload
  options?: {
    idempotencyKey?: string;      // override auto-generated UUID v7
    schemaVersion?: string;       // semver, e.g. '1.0.0' (default: '1.0.0')
  }
): Promise<{ eventId: string }>
```
## Publish a Batch
Publishing many events in a single call reduces network round-trips and allows Kafka to batch them into a single produce request:
```typescript
const results = await kernel.events().publishBatch([
  {
    type: 'crm.contact.created',
    data: { contactId: '01j9pa5mz700000000000000', name: 'Alice Chen' },
  },
  {
    type: 'crm.deal.won',
    data: { dealId: '01j9padd1000000000000000', amount: 75000 },
  },
]);
// results: [{ eventId: '...' }, { eventId: '...' }]
```
Batch publishing is atomic at the Kafka level for events going to the same topic-partition. Events for different partitions may be delivered in separate batches internally.
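For very large sets of events, one option is to split them into fixed-size batches before calling `publishBatch()`. A sketch of such a helper; the batch size of 100 is an assumption, not a documented SDK limit:

```typescript
// Hypothetical helper: split a large event list into fixed-size slices so
// each publishBatch() call stays a manageable size.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage sketch (assumed batch size of 100):
// for (const batch of chunk(allEvents, 100)) {
//   await kernel.events().publishBatch(batch);
// }
```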
## REST Endpoint
```http
POST https://api.septemcore.com/v1/events/publish
Authorization: Bearer <access_token>
Content-Type: application/json
Idempotency-Key: 01j9pa3kx200000000000000

{
  "type": "crm.contact.created",
  "data": {
    "contactId": "01j9pa5mz700000000000000",
    "name": "Alice Chen"
  },
  "schemaVersion": "1.0.0"
}
```
Response 201 Created:
```json
{
  "eventId": "01j9pa9ev300000000000000",
  "type": "crm.contact.created",
  "topic": "platform.data.events",
  "partition": 3,
  "offset": 128471
}
```
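Assembling this request from TypeScript can be sketched as below. The URL and headers come from the example above; the token and idempotency key are placeholders supplied by the caller:

```typescript
interface PublishBody {
  type: string;
  data: Record<string, unknown>;
  schemaVersion?: string;
}

// Sketch: build the request options for the publish endpoint. If the caller
// omits schemaVersion, the default '1.0.0' is used.
function buildPublishRequest(token: string, idempotencyKey: string, body: PublishBody) {
  return {
    url: 'https://api.septemcore.com/v1/events/publish',
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
      'Idempotency-Key': idempotencyKey,
    },
    body: JSON.stringify({ schemaVersion: '1.0.0', ...body }),
  };
}
```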
## Publish RBAC

A module can only publish events it has declared in
`module.manifest.json` under `events.publishes[]`. This is enforced
by the API Gateway before the event reaches Kafka.
```json
{
  "events": {
    "publishes": [
      "crm.contact.created",
      "crm.deal.won"
    ]
  }
}
```
| Publish attempt | Result |
|---|---|
| `crm.contact.created` (declared) | ✅ 201 Created |
| `crm.invoice.sent` (not declared) | ❌ 403 Forbidden |
| Module with no `events.publishes` | ❌ All publishes blocked |
## Kernel Event Whitelist
Kernel events belong to the core platform services. No module can publish them — the producer whitelist is hardcoded in the API Gateway:
| Event namespace | Allowed publishers |
|---|---|
| `auth.*` | IAM service only |
| `money.*` | Money Service only |
| `billing.*` | Billing service only |
| `audit.*` | Audit Service only |
| `module.registry.*` | Module Registry only |
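Conceptually the whitelist is a namespace-prefix lookup. A sketch of such a check; this is illustrative, not the Gateway's source, and the publisher identifiers (`'iam'`, `'money'`, ...) are assumptions:

```typescript
// Hypothetical whitelist: kernel namespace prefix -> sole allowed publisher.
const kernelNamespaces: Record<string, string> = {
  'auth': 'iam',
  'money': 'money',
  'billing': 'billing',
  'audit': 'audit',
  'module.registry': 'module-registry',
};

// Returns false when a non-whitelisted publisher targets a kernel namespace.
// Non-kernel namespaces fall through to normal events.publishes[] RBAC.
function isKernelPublishAllowed(eventType: string, publisher: string): boolean {
  for (const [ns, allowed] of Object.entries(kernelNamespaces)) {
    if (eventType.startsWith(ns + '.')) return publisher === allowed;
  }
  return true;
}
```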
Attempting to publish a kernel-namespaced event from a module returns:
```json
{
  "type": "https://api.septemcore.com/problems/forbidden",
  "status": 403,
  "detail": "Module 'crm' is not permitted to publish kernel event 'auth.user.created'.",
  "code": "KERNEL_EVENT_PUBLISH_FORBIDDEN"
}
```
## `tenantId` Injection

`tenantId` is always set by the API Gateway from the request JWT.
Even if the module includes a `tenantId` field in the event payload,
the Gateway overwrites it with the authenticated value. This makes
cross-tenant contamination structurally impossible.
The flow:
```
Module code: publish('crm.contact.created', { contactId: '...' })
        │
        ▼
API Gateway extracts tenantId from JWT
Injects into event envelope (not payload)
        │
        ▼
Kafka message:
{ id, type, source, tenantId, data, schemaVersion, timestamp, traceId }
```
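The overwrite step can be sketched as follows. This is illustrative, not Gateway source, and the envelope is reduced to the fields relevant here:

```typescript
// Simplified envelope: only the fields relevant to tenant injection.
interface Envelope {
  type: string;
  tenantId: string;
  data: Record<string, unknown>;
}

// tenantId always comes from the verified JWT; any tenantId the module put
// in the payload is discarded, never trusted.
function buildEnvelope(
  jwtTenantId: string,
  type: string,
  payload: Record<string, unknown>,
): Envelope {
  const { tenantId: _moduleSupplied, ...data } = payload;
  return { type, tenantId: jwtTenantId, data };
}
```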
## Idempotency
The SDK auto-generates a UUID v7 idempotency key for every publish
call. When using the REST API, supply the Idempotency-Key header
manually. If the same key is received again within 24 hours, the Event
Bus returns the original response without a second Kafka produce:
```
First call:  Idempotency-Key: 01j9pa3kx200000000000000 → 201 Created
Second call: Idempotency-Key: 01j9pa3kx200000000000000 → 200 OK (cached response)
```
Idempotency keys are stored in Valkey with a 24-hour TTL. This makes publish calls safe to retry after a network timeout.
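The store's behaviour can be modelled in a few lines. Below is a toy in-memory stand-in for the Valkey cache, for illustration only:

```typescript
// Toy in-memory model of the Event Bus's idempotency store (Valkey with a
// 24-hour TTL in production). Illustrative only, not the real service.
class IdempotencyCache {
  private store = new Map<string, { response: unknown; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  // Return the cached response for a key, or undefined if absent or expired.
  get(key: string): unknown {
    const hit = this.store.get(key);
    if (!hit || hit.expiresAt < Date.now()) return undefined;
    return hit.response;
  }

  // Record a response so a retried key can be answered without a second
  // Kafka produce.
  set(key: string, response: unknown): void {
    this.store.set(key, { response, expiresAt: Date.now() + this.ttlMs });
  }
}
```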
## Broker Unavailability
If Kafka is unavailable when publish() is called, the SDK returns
an error immediately:
```typescript
try {
  await kernel.events().publish('crm.contact.created', payload);
} catch (err) {
  if (err.code === 'events.broker.unavailable') {
    // Kafka is down — decide: retry with backoff, or log and continue
    await scheduleRetry(payload);
    return; // without this, the error is rethrown even after scheduling a retry
  }
  throw err;
}
```
The SDK does not buffer events internally for Kafka-down scenarios — the module is responsible for deciding between retry, dead-letter, or silent drop based on business criticality.
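The retry option can be sketched as a generic wrapper; `publishWithBackoff` is a hypothetical helper, and the attempt count and base delay below are assumptions, not platform defaults:

```typescript
// Hypothetical retry wrapper: re-invoke the supplied publish function with
// exponential backoff while the SDK reports the broker as unavailable.
async function publishWithBackoff<T>(
  publish: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 200,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await publish();
    } catch (err) {
      const code = (err as { code?: string }).code;
      // Non-broker errors and exhausted retries propagate to the caller.
      if (code !== 'events.broker.unavailable' || attempt >= maxAttempts) {
        throw err;
      }
      // Exponential backoff: 200 ms, 400 ms, 800 ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}

// Usage sketch:
// const { eventId } = await publishWithBackoff(() =>
//   kernel.events().publish('crm.contact.created', payload),
// );
```

Because the SDK reuses the same idempotency key per `publish()` call, wrapping it this way cannot double-produce on retry.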
## Error Reference
| Scenario | HTTP | Code |
|---|---|---|
| Event type not in `publishes[]` | 403 | `EVENT_PUBLISH_FORBIDDEN` |
| Kernel event published by module | 403 | `KERNEL_EVENT_PUBLISH_FORBIDDEN` |
| Payload exceeds 1 MB | 400 | `EVENT_PAYLOAD_TOO_LARGE` |
| Invalid schema version format | 400 | `INVALID_SCHEMA_VERSION` |
| Kafka unavailable | 503 | `events.broker.unavailable` |
| Duplicate idempotency key | 200 | — (cached response returned) |