Event Bus REST API Reference
The Event Bus is the asynchronous backbone of the platform. It provides Pub/Sub via Apache Kafka for domain events (clicks, conversions, payouts) and RabbitMQ for transactional tasks (notifications, payments). All event schemas are defined in Protobuf and enforced via Buf CLI backward-compatibility checks on every PR.
See the REST API Overview for authentication, error format, pagination, and rate limiting.
For layout brevity, the /api/v1 base path prefix is omitted from the endpoint tables below.
Endpoints
| Endpoint | Auth | Description |
|---|---|---|
| POST /events/publish | ✅ | Publish a single event |
| POST /events/publish/batch | ✅ | Batch publish (up to 500 events) |
| GET /events/topics | ✅ | List available topics |
| GET /events/topics/:topic/schema | ✅ | Topic Protobuf schema + version |
| GET /events/topics/:topic/config | ✅ | Topic config (retention, partitions) |
| GET /events/subscriptions | ✅ | List active subscriptions for module |
| POST /events/subscriptions | ✅ | Register a subscription |
| DELETE /events/subscriptions/:id | ✅ | Unsubscribe |
| GET /events/subscriptions/:id/position | ✅ | Consumer position (offset + lag) |
| POST /events/subscriptions/:id/seek | ✅ | Replay — seek to timestamp |
| GET /events/dlq | ✅ | Dead Letter Queue — list failed |
| GET /events/dlq/:id | ✅ | DLQ event details |
| POST /events/dlq/:id/retry | ✅ | Retry a single DLQ event |
| POST /events/dlq/replay-all | ✅ | Replay all pending DLQ events |
| DELETE /events/dlq/:id | ✅ | Mark DLQ event as resolved |
POST /api/v1/events/publish — Publish Event
POST https://api.septemcore.com/v1/events/publish
Authorization: Bearer <access_token>
Content-Type: application/json
{
"topic": "platform.module.events",
"type": "module.registry.activated",
"payload": { "moduleId": "crm-v2" },
"schemaVersion": "1.0.0"
}
Response 202 Accepted (fire-and-forget — Kafka publish is async):
{
"eventId": "01j9pevt0000000000000001",
"topic": "platform.module.events",
"type": "module.registry.activated",
"status": "queued",
"createdAt": "2026-04-22T04:10:00Z"
}
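The publish contract above can be enforced client-side before the request ever leaves the module. The sketch below is a hypothetical helper (the name `build_publish_request` and the client-side checks are assumptions, not part of the official SDK) that assembles a publish body and rejects obvious contract violations: a malformed event type or a payload over the 1 MB limit.

```python
import json

MAX_PAYLOAD_BYTES = 1_000_000  # 1 MB publish limit from the API contract


def build_publish_request(topic: str, event_type: str, payload: dict,
                          schema_version: str = "1.0.0") -> dict:
    """Assemble and pre-validate a body for POST /api/v1/events/publish.

    Raises ValueError when the body would be rejected by the API anyway,
    saving a round trip. This mirrors, not replaces, server-side checks.
    """
    # Event types must follow the domain.entity.verb convention.
    if len(event_type.split(".")) != 3:
        raise ValueError("event type must follow domain.entity.verb")
    # Payloads above 1 MB should go through the Claim Check pattern instead.
    if len(json.dumps(payload).encode("utf-8")) > MAX_PAYLOAD_BYTES:
        raise ValueError("payload exceeds 1 MB; use the Claim Check pattern")
    return {
        "topic": topic,
        "type": event_type,
        "payload": payload,
        "schemaVersion": schema_version,
    }
```

Note that `tenantId` and `source` are deliberately absent: per the field table below, the Gateway injects them and ignores any caller-supplied values.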
Event Object
| Field | Type | Description |
|---|---|---|
| eventId | string (UUID v7) | Unique event ID — used for idempotency dedup |
| type | string | Event type in domain.entity.verb format |
| source | string | Publishing module ID (injected by Gateway, not trusted from client) |
| tenantId | string | Injected by Gateway from JWT — cannot be set by caller |
| payload | object | Event payload (max 1 MB; use Claim Check pattern for larger data) |
| schemaVersion | string (semver) | Protobuf schema version |
| traceId | string | OpenTelemetry trace ID |
| timestamp | string (ISO 8601) | Event creation time |
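The payload row mentions the Claim Check pattern for data over 1 MB. A minimal sketch of that pattern, assuming an external blob store (here a plain dict stands in for real object storage; the key name `claimCheckId` is illustrative, not a documented field):

```python
import json
import uuid

MAX_INLINE_BYTES = 1_000_000  # inline payload limit per the field table


def apply_claim_check(payload: dict, blob_store: dict) -> dict:
    """Return the payload unchanged if it fits inline; otherwise park the
    full body in a blob store and publish only a small reference to it
    (the Claim Check pattern). Consumers fetch the blob by its ID."""
    raw = json.dumps(payload).encode("utf-8")
    if len(raw) <= MAX_INLINE_BYTES:
        return payload
    claim_id = str(uuid.uuid4())
    blob_store[claim_id] = raw  # real code would upload to object storage
    return {"claimCheckId": claim_id, "sizeBytes": len(raw)}
```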
Publish RBAC
A module can only publish events declared in manifest.events.publishes[].
Attempting to publish an undeclared event returns 403 Forbidden.
Kernel-owned events (auth.*, money.*, billing.*) can only be published
by kernel services — module attempts are always rejected.
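The publish-side access rules reduce to two checks: a reserved-prefix gate for kernel-owned domains, then a manifest lookup. A sketch, assuming the function name and the boolean `is_kernel_service` flag (both hypothetical; the real enforcement lives in the Gateway):

```python
# Kernel-owned event domains: modules can never publish these.
KERNEL_PREFIXES = ("auth.", "money.", "billing.")


def can_publish(event_type: str, manifest_publishes: list,
                is_kernel_service: bool = False) -> bool:
    """Mirror the gateway-side publish RBAC check.

    Kernel-owned prefixes are rejected for ordinary modules regardless of
    their manifest; everything else must be declared in
    manifest.events.publishes[]. A False result maps to 403 Forbidden.
    """
    if event_type.startswith(KERNEL_PREFIXES):
        return is_kernel_service
    return event_type in manifest_publishes
```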
POST /api/v1/events/publish/batch — Batch Publish
POST https://api.septemcore.com/v1/events/publish/batch
Authorization: Bearer <access_token>
Content-Type: application/json
{
"topic": "platform.module.events",
"events": [
{ "type": "module.registry.activated", "payload": { "moduleId": "crm-v2" } },
{ "type": "module.registry.deactivated", "payload": { "moduleId": "old-crm" } }
]
}
| Limit | Value |
|---|---|
| Max events per batch | 500 |
| Max payload per event | 1 MB |
| Exceeded | 400 Bad Request (problems/batch-limit-exceeded) |
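Callers with more than 500 events should split them client-side rather than trip the 400. A one-liner sketch (the helper name is an assumption):

```python
MAX_BATCH = 500  # hard limit per POST /events/publish/batch


def chunk_events(events: list) -> list:
    """Split an oversized event list into batches the endpoint accepts,
    so a bulk publish never hits problems/batch-limit-exceeded."""
    return [events[i:i + MAX_BATCH] for i in range(0, len(events), MAX_BATCH)]
```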
GET /api/v1/events/topics — List Topics
Lists all Kafka topics available to the current module (filtered by manifest
events.subscribes[] declaration):
{
"data": [
{
"topic": "platform.auth.events",
"domain": "auth",
"partitions": 12,
"retention": "168h",
"examples": ["auth.user.created", "auth.user.logged_in"]
},
{
"topic": "platform.money.events",
"domain": "money",
"partitions": 12,
"retention": "168h",
"examples": ["money.wallet.credited", "money.wallet.debited"]
}
],
"meta": { "cursor": null, "hasMore": false }
}
Kafka Topic Naming Convention
| Parameter | Value |
|---|---|
| Pattern | platform.<domain>.events — one topic per bounded context |
| Partition key | entityId — all events for one entity land on the same partition (ordering guarantee) |
| Cross-entity ordering | Not guaranteed — events for entity A and B on different partitions may arrive out of relative order |
| Default retention | 7 days (KAFKA_RETENTION_HOURS=168) |
| Audit topic retention | 30 days (KAFKA_AUDIT_RETENTION_HOURS=720) — extended for compliance |
| Max payload | 1 MB (message.max.bytes) |
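The ordering guarantee in the table follows directly from keyed partitioning: a stable hash of entityId always selects the same partition, so all events for one entity are consumed in publish order. A sketch of the idea (Kafka's default producer actually uses a murmur2-based partitioner, not SHA-256; any deterministic hash demonstrates the property):

```python
import hashlib


def partition_for(entity_id: str, partitions: int = 12) -> int:
    """Deterministically map an entityId to a partition number.

    Because the mapping is a pure function of the key, every event for
    one entity lands on one partition, which is what gives per-entity
    ordering. Cross-entity ordering is not implied: two different IDs
    may hash to different partitions.
    """
    digest = hashlib.sha256(entity_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % partitions
```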
Platform Topics Catalogue
| Topic | Domain | Example events |
|---|---|---|
| platform.auth.events | IAM | auth.user.created, auth.user.logged_in, auth.role.changed |
| platform.module.events | Module Registry | module.registry.registered, module.registry.activated |
| platform.money.events | Money Service | money.wallet.credited, money.wallet.debited |
| platform.files.events | File Storage | files.file.uploaded |
| platform.notify.events | Notify Service | notify.notification.sent |
| platform.audit.events | Audit Service | audit.record.created |
| platform.billing.events | Billing | billing.plan.changed, billing.subscription.changed |
| platform.data.events | Data Layer (CDC) | CDC events via Debezium |
GET /api/v1/events/topics/:topic/schema — Topic Schema
Returns the current Protobuf schema for a topic and its semver version:
{
"topic": "platform.money.events",
"schemaVersion": "2.1.0",
"protoDefinition": "syntax = \"proto3\";\n\nmessage MoneyEvent {\n string event_id = 1;\n ...\n}",
"breakingChangesPolicy": "buf breaking (FILE) enforced on every PR"
}
POST /api/v1/events/subscriptions — Register Subscription
POST https://api.septemcore.com/v1/events/subscriptions
Authorization: Bearer <access_token>
Content-Type: application/json
{
"topic": "platform.money.events",
"consumerGroup": "notify.money-handler",
"handlerName": "money-handler"
}
Consumer groups follow the naming convention <service>.<handler-name>.
Subscribe RBAC
A module can only subscribe to topics declared in manifest.events.subscribes[].
Attempting to subscribe to an undeclared topic returns 403 Forbidden.
An empty (or absent) subscribes field in the manifest means zero allowed subscriptions.
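Both subscription rules (manifest declaration and the consumer-group naming convention) can be checked before calling the endpoint. A hypothetical client-side validator; the exact character set allowed in group names is an assumption:

```python
import re

# <service>.<handler-name>, e.g. "notify.money-handler"
GROUP_RE = re.compile(r"^[a-z0-9-]+\.[a-z0-9-]+$")


def validate_subscription(topic: str, consumer_group: str,
                          manifest_subscribes) -> None:
    """Mirror the subscription rules documented above.

    An empty or absent subscribes list means nothing is allowed; an
    undeclared topic maps to 403, a malformed group name to a client bug.
    """
    if not manifest_subscribes or topic not in manifest_subscribes:
        raise PermissionError("403 problems/events.subscribe.forbidden")
    if not GROUP_RE.match(consumer_group):
        raise ValueError("consumer group must follow <service>.<handler-name>")
```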
GET /api/v1/events/subscriptions/:id/position — Consumer Position
{
"subscriptionId": "sub_01j9p000000001",
"topic": "platform.money.events",
"consumerGroup": "notify.money-handler",
"partitions": [
{ "partition": 0, "currentOffset": 10450, "logEndOffset": 10450, "lag": 0 },
{ "partition": 1, "currentOffset": 9200, "logEndOffset": 9250, "lag": 50 }
],
"totalLag": 50
}
Consumer Lag Alert Thresholds
| Lag level | Action |
|---|---|
| > 1 min | info — logged only |
| > 5 min | warning — Notify sent to tenant-admin |
| > 15 min | critical — Notify + PagerDuty |
| Bulk-import spike detected | Warnings suppressed for 30 minutes to prevent false alerts |
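The position payload and the threshold table compose naturally: lag is summed per partition, then a time-based lag figure is mapped onto an alert level. A sketch under two assumptions: the function names are hypothetical, and spike suppression is modelled as downgrading warnings to info while leaving critical alerts intact (the table does not spell out whether criticals are also suppressed).

```python
def total_lag(partitions: list) -> int:
    """Sum per-partition lag exactly as the position endpoint reports it:
    logEndOffset minus currentOffset for each assigned partition."""
    return sum(p["logEndOffset"] - p["currentOffset"] for p in partitions)


def alert_level(lag_minutes: float, suppressed: bool = False) -> str:
    """Map time-based consumer lag onto the documented alert thresholds.

    `suppressed` models the 30-minute quiet window after a bulk-import
    spike; here it downgrades warnings only (an assumption).
    """
    if lag_minutes > 15:
        return "critical"
    if lag_minutes > 5:
        return "info" if suppressed else "warning"
    if lag_minutes > 1:
        return "info"
    return "ok"
```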
POST /api/v1/events/subscriptions/:id/seek — Replay from Timestamp
Seeks a consumer group to a specific point in time, enabling event replay:
POST https://api.septemcore.com/v1/events/subscriptions/sub_01j9p000000001/seek
Authorization: Bearer <access_token>
Content-Type: application/json
{
"seekTo": "2026-04-20T00:00:00Z"
}
Response 202 Accepted.
Kafka stores events for KAFKA_RETENTION_HOURS (default 168 h = 7 days). Seeking beyond the retention window returns 422 Unprocessable Entity (problems/events.retention.exceeded).
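A client can apply the same retention check locally before issuing the seek. A sketch (the function name is an assumption; the window matches the documented 168-hour default):

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(hours=168)  # KAFKA_RETENTION_HOURS default (7 days)


def validate_seek(seek_to: datetime, now: datetime = None) -> None:
    """Reject seek targets older than the Kafka retention window before
    the request reaches the broker; mirrors the API's 422 response."""
    now = now or datetime.now(timezone.utc)
    if seek_to < now - RETENTION:
        raise ValueError("422 problems/events.retention.exceeded")
```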
Dead Letter Queue (DLQ)
Events that fail processing three times in a row (poison messages) are moved to the dead-letter topic platform.<domain>.dlq and removed from the main consumer stream; the consumer continues processing subsequent events.
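The poison-message flow above can be sketched as a consumer loop: each event gets up to three attempts, then it is parked on the DLQ with its failure metadata and the stream moves on. All names here are illustrative; the real consumer runs inside the platform SDK.

```python
MAX_ATTEMPTS = 3  # failures before an event is declared poison


def consume(events: list, handler, dlq: list) -> list:
    """Process events in order, retrying each up to MAX_ATTEMPTS times.

    A persistently failing event is appended to `dlq` in the shape the
    GET /events/dlq endpoint returns, and processing continues with the
    next event instead of blocking the whole partition.
    """
    processed = []
    for event in events:
        for attempt in range(1, MAX_ATTEMPTS + 1):
            try:
                handler(event)
                processed.append(event["eventId"])
                break
            except Exception as err:
                if attempt == MAX_ATTEMPTS:
                    dlq.append({
                        "event": event,
                        "failureCount": attempt,
                        "lastError": str(err),
                        "status": "pending",
                    })
    return processed
```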
GET /api/v1/events/dlq — List DLQ Events
GET https://api.septemcore.com/v1/events/dlq
?topic=platform.money.events
&status=pending
&limit=20
Authorization: Bearer <access_token>
DLQ event object:
{
"id": "dlq_01j9p000000001",
"eventId": "01j9pevt0000000000000005",
"topic": "platform.money.events",
"type": "money.wallet.credited",
"payload": { "txId": "tx_001", "amountCents": 5000 },
"failureCount": 3,
"lastError": "consumer timeout after 30s",
"status": "pending",
"createdAt": "2026-04-21T10:00:00Z"
}
| DLQ status | Meaning |
|---|---|
| pending | Failed, awaiting manual retry or replay-all |
| resolved | Manually marked as resolved (not retried) |
| retrying | Retry in progress |
POST /api/v1/events/dlq/:id/retry — Retry Single Event
POST https://api.septemcore.com/v1/events/dlq/dlq_01j9p000000001/retry
Authorization: Bearer <access_token>
Response 202 Accepted.
POST /api/v1/events/dlq/replay-all — Throttled Bulk Replay
Replays all pending DLQ events at a controlled rate:
POST https://api.septemcore.com/v1/events/dlq/replay-all
Authorization: Bearer <access_token>
Response 202 Accepted:
{
"replayed": 48,
"skipped_already_resolved": 12
}
| Parameter | Value |
|---|---|
| Replay rate | 50 events/sec (DLQ_REPLAY_RATE_LIMIT) |
| Scope | Only status=pending — resolved events are skipped |
| Idempotency | Financial events use permanent processed_event_ids in PostgreSQL (not Valkey TTL — DLQ events may be older than 24 h) |
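The scope and idempotency rules above can be sketched together: only pending events replay, resolved ones are skipped, and a permanent processed-ID set (standing in for the processed_event_ids table in PostgreSQL) makes repeated replays safe. The function name and the flat `skipped` counter are simplifications; rate limiting is omitted.

```python
def replay_pending(dlq_events: list, processed_ids: set, handler) -> dict:
    """Replay-all sketch: run every status=pending DLQ event through the
    handler exactly once. Resolved events and already-processed event IDs
    are skipped, so calling this twice never double-applies an event."""
    replayed = skipped = 0
    for ev in dlq_events:
        if ev["status"] != "pending" or ev["eventId"] in processed_ids:
            skipped += 1
            continue
        handler(ev)
        processed_ids.add(ev["eventId"])  # permanent dedup record
        ev["status"] = "resolved"
        replayed += 1
    return {"replayed": replayed, "skipped": skipped}
```

The permanent set matters for financial events precisely because a Valkey key with a 24 h TTL could expire before an old DLQ event is replayed, silently disabling the dedup.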
Tenant Isolation
| Layer | Mechanism |
|---|---|
| Publish | Gateway injects tenantId from JWT — module cannot substitute another tenant |
| Subscribe | SDK automatically filters by tenantId from JWT — cross-tenant event access is impossible |
| Kafka topics | Shared topics for all tenants; partition key = entityId. Topic-per-tenant would create millions of topics — one topic per domain is the industry standard |
Broker Unavailability
| Situation | Behaviour |
|---|---|
| Kafka down at publish() | Returns 503 Service Unavailable (problems/events.broker.unavailable). Module decides: retry or ignore |
| Kafka down at subscribe() | SDK auto-reconnects. Consumer resumes from last committed offset on recovery |
| RabbitMQ down | Returns 503. Transactional tasks wait for recovery |
Error Reference
| Error type | Status | Trigger |
|---|---|---|
| problems/events.broker.unavailable | 503 | Kafka or RabbitMQ is unreachable |
| problems/events.publish.forbidden | 403 | Module publishing an event not in manifest.events.publishes[] |
| problems/events.subscribe.forbidden | 403 | Module subscribing to topic not in manifest.events.subscribes[] |
| problems/events.retention.exceeded | 422 | Seek target timestamp is older than Kafka retention window |
| problems/batch-limit-exceeded | 400 | Batch publish exceeds 500 events |
| problems/topic-not-found | 404 | Topic does not exist |
| problems/subscription-not-found | 404 | Subscription ID does not exist |