# @platform/sdk-audit

`@platform/sdk-audit` provides append-only, immutable audit logging for any module action. Every call to `record()` is non-blocking: the operation that triggered the audit record never waits for the write to complete. The audit pipeline uses dual-write (Kafka primary + PostgreSQL WAL fallback) to guarantee zero record loss.
## Installation

```shell
pnpm add @platform/sdk-audit
```
## record()

Write a single audit record. This call is always fire-and-forget; the calling service is never blocked, even if Kafka is temporarily unavailable:

```typescript
import { kernel } from '@platform/sdk-core';

await kernel.audit().record({
  action: 'contact.updated',
  entityType: 'contact',
  entityId: '01j9pcont000000000000001',
  description: 'Email address updated',
  before: {}, // state snapshot before the change
  after: {},  // state snapshot after the change
  metadata: {
    source: 'profile-settings-form',
  },
});
// Returns: void; resolves once the record is routed to Kafka (or the WAL fallback).
// The audit write does NOT need to complete before the business operation returns.
```
The SDK automatically injects `userId`, `tenantId`, `ip`, `userAgent`, and `timestamp` from the current request context (JWT + HTTP headers). Module authors supply only business-relevant fields.
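Conceptually, the injection amounts to merging request-context fields over the caller-supplied input. The sketch below is illustrative only: `RequestContext`, `AuditInput`, and `enrich` are hypothetical names, not part of the SDK.

```typescript
// Hypothetical sketch of context enrichment. The SDK performs an
// equivalent merge internally; these types and names are illustrative.
interface RequestContext {
  userId: string;
  tenantId: string;
  ip: string;
  userAgent: string;
}

interface AuditInput {
  action: string;
  entityType: string;
  entityId: string;
  description?: string;
  before?: object;
  after?: object;
  metadata?: object;
}

function enrich(input: AuditInput, ctx: RequestContext) {
  return {
    ...input,
    ...ctx,                              // injected from JWT + HTTP headers
    timestamp: new Date().toISOString(), // ISO 8601 UTC, set by the service
  };
}

const record = enrich(
  { action: 'contact.updated', entityType: 'contact', entityId: '01j9pcont000000000000001' },
  { userId: 'u1', tenantId: 't1', ip: '203.0.113.7', userAgent: 'curl/8' },
);
```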
## Audit Record Model

| Field | Type | Description |
|---|---|---|
| `action` | string | Event name: `contact.updated`, `payout.approved`, `user.login` |
| `entityType` | string | Resource type: `user`, `wallet`, `contact`, `module` |
| `entityId` | string | ULID of the resource |
| `userId` | string | Injected automatically from JWT |
| `tenantId` | string | Injected automatically from JWT |
| `ip` | string | Injected automatically from request headers |
| `userAgent` | string | Injected automatically from request headers |
| `before` | object | State snapshot before the change |
| `after` | object | State snapshot after the change |
| `description` | string | Human-readable summary for the audit UI |
| `metadata` | object | Arbitrary key-value context |
| `timestamp` | string | ISO 8601 UTC; set by the service, immutable |
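The model above maps naturally onto a TypeScript interface. This is an illustrative sketch derived from the table; the SDK's actual exported type may differ in optionality or naming.

```typescript
// Illustrative shape of an audit record; fields follow the table above.
// Not the SDK's exported type, just a sketch for orientation.
interface AuditRecord {
  action: string;       // e.g. 'contact.updated'
  entityType: string;   // e.g. 'contact'
  entityId: string;     // ULID of the resource
  userId: string;       // injected from JWT
  tenantId: string;     // injected from JWT
  ip: string;           // injected from request headers
  userAgent: string;    // injected from request headers
  before?: Record<string, unknown>;
  after?: Record<string, unknown>;
  description?: string;
  metadata?: Record<string, unknown>;
  timestamp: string;    // ISO 8601 UTC, immutable
}
```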
## recordBatch()

Write multiple audit records in a single call. Useful for bulk operations where multiple entities change atomically:

```typescript
await kernel.audit().recordBatch([
  {
    action: 'role.permission.added',
    entityType: 'role',
    entityId: '01j9prole000000000000001',
    before: { permissions: ['crm.read'] },
    after: { permissions: ['crm.read', 'crm.write'] },
  },
  {
    action: 'user.role.assigned',
    entityType: 'user',
    entityId: '01j9pusr0000000000000001',
    before: { roles: ['viewer'] },
    after: { roles: ['viewer', 'editor'] },
  },
]);
```
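For very large bulk operations it can be worth bounding the size of each call. A generic chunking helper (not part of the SDK, purely illustrative) keeps each `recordBatch()` payload to a fixed maximum:

```typescript
// Split an array into chunks of at most `size` items, so each
// recordBatch() call stays within a reasonable payload size.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage sketch (assuming `records` is an array of audit inputs):
// for (const batch of chunk(records, 500)) {
//   await kernel.audit().recordBatch(batch);
// }
```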
## What Must Be Audited

All of the following are audited automatically by platform kernel services. Module authors must additionally audit their own business-critical operations:
| Category | Examples |
|---|---|
| All financial transactions | credit, debit, hold, reversal, payout |
| All admin actions | permission changes, account suspension |
| All API calls | webhooks, postbacks, integrations |
| All settings changes | RBAC, billing plan changes |
| All tracking events | clicks, registrations, first-time deposits |
| All config changes | feature flags, module registry changes |
## query()

Search audit records with filters. Backed by ClickHouse for sub-millisecond query performance on billions of rows:

```typescript
const results = await kernel.audit().query({
  filters: {
    action: 'contact.updated',
    entityType: 'contact',
    userId: '01j9pusr0000000000000001',
    from: '2026-04-01T00:00:00Z',
    to: '2026-04-30T23:59:59Z',
  },
  orderBy: 'timestamp',
  direction: 'desc',
  limit: 50,
  cursor: undefined, // cursor-based pagination (CONVENTIONS §4)
});
// results: { data: AuditRecord[], meta: { cursor, hasMore, total } }
```
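Draining all pages of a cursor-paginated result is a simple loop over `meta.cursor` / `meta.hasMore`. In the sketch below, `fetchPage` is a hypothetical stand-in for a call like `kernel.audit().query({ ...filters, cursor })`:

```typescript
// Drain every page of a cursor-paginated result set.
// `fetchPage` stands in for kernel.audit().query() with a cursor.
interface Page<T> {
  data: T[];
  meta: { cursor?: string; hasMore: boolean };
}

async function fetchAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.data);
    if (!page.meta.hasMore) break;
    cursor = page.meta.cursor;
  } while (cursor);
  return all;
}
```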
**Hot vs cold data:** records from the last 90 days are served instantly from ClickHouse hot storage. Records from 91 days to 7 years live in S3 Glacier cold storage and take 3–12 hours to restore. The Admin UI shows a "Cold archive" badge when a query spans cold data.
## getEntityHistory()

Get the full chronological audit trail for a specific entity: all changes made to a contact, wallet, user, or any other resource:

```typescript
const history = await kernel.audit().getEntityHistory({
  entityType: 'wallet',
  entityId: '01j9pwal0000000000000001',
  limit: 25,
});
// Returns all audit records for this wallet, newest first.
// history.data[0]: { action: 'money.wallet.debited', before: {...}, after: {...}, ... }
```
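The `before`/`after` snapshots make it straightforward to render a field-level diff in an audit UI. A minimal helper (illustrative, not part of the SDK):

```typescript
// List the keys whose values differ between the before and after
// snapshots, including keys present on only one side.
function changedFields(
  before: Record<string, unknown>,
  after: Record<string, unknown>,
): string[] {
  const keys = new Set([...Object.keys(before), ...Object.keys(after)]);
  return [...keys].filter(
    (k) => JSON.stringify(before[k]) !== JSON.stringify(after[k]),
  );
}
```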
## exportRecords()

Export audit records as JSON or CSV. The export waits for any pending GDPR anonymizations to complete before building the file (GDPR-compliant export guarantee):

```typescript
const exportJob = await kernel.audit().exportRecords({
  format: 'csv', // 'json' | 'csv'
  filters: {
    from: '2026-01-01T00:00:00Z',
    to: '2026-03-31T23:59:59Z',
    entityType: 'wallet',
  },
});
// exportJob: { jobId: 'uuid', status: 'processing' }

// Poll for completion:
const status = await kernel.audit().getExportStatus(exportJob.jobId);
// { jobId, status: 'completed', downloadUrl: 'https://...', expiresAt: '...' }
```
HTTP equivalent:

```http
GET https://api.septemcore.com/v1/audit/export?format=csv&from=2026-01-01T00:00:00Z&to=2026-03-31T23:59:59Z
Authorization: Bearer <access_token>
```
## Write Pipeline: Dual-Write Guarantee

The audit write pipeline guarantees zero record loss even during infrastructure failures:

- Primary path: Service → Kafka (`platform.audit.events`) → ClickHouse consumer → ClickHouse
- Fallback path: Kafka unavailable → PostgreSQL `audit_wal` table → Background replay → Kafka
| Step | Behaviour |
|---|---|
| 1. Service publishes | Async Kafka publish — does not block business operation |
| 2. Kafka healthy | Consumer batches records into ClickHouse |
| 3. ClickHouse down | Kafka retains events (up to 30 days for platform.audit.events) |
| 4. Kafka down | Fallback: audit record written to PostgreSQL audit_wal. Background goroutine (30s ticker) replays WAL → Kafka on recovery |
| 5. Both down | WAL accumulates in PostgreSQL. On any recovery → full replay. Zero loss guaranteed. |
Business operation isolation: if `kernel.audit().record()` itself fails before routing, it silently retries via the WAL; the business operation (money transfer, user update) completes and returns `200 OK` regardless. SOX/PCI-DSS: 100% audit trail.
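The routing decision can be sketched as try-Kafka-then-WAL. `publishToKafka` and `writeToWal` below are hypothetical stand-ins for the real transports, injected here so the logic is self-contained:

```typescript
// Dual-write sketch: try the Kafka primary path; on failure, persist
// the record to the PostgreSQL WAL table for background replay.
// publishToKafka / writeToWal are hypothetical stand-ins.
async function routeAuditRecord(
  record: object,
  publishToKafka: (r: object) => Promise<void>,
  writeToWal: (r: object) => Promise<void>,
): Promise<'kafka' | 'wal'> {
  try {
    await publishToKafka(record);
    return 'kafka';
  } catch {
    // Kafka unavailable: fall back to the WAL. A background job
    // (30s ticker) replays WAL entries to Kafka on recovery.
    await writeToWal(record);
    return 'wal';
  }
}
```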
## Retention Policy
| Period | Storage | Access |
|---|---|---|
| 0–90 days | ClickHouse (hot) | Instant (milliseconds) — Admin UI, API query |
| 91 days – 7 years | S3 Glacier (cold) | On request — 3–12 hours restoration time |
| After 7 years | Deleted | No access |
Hot/cold transition uses ClickHouse's native TTL:

```sql
TTL toDate(timestamp) + INTERVAL 90 DAY TO VOLUME 's3_cold'
```
Retention period is required by AML/KYC compliance (same 7-year standard as Money Service transaction retention).
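The retention tiers can be expressed as a small classification helper (illustrative; the day thresholds follow the table above, with 7 years approximated as 7 × 365 days):

```typescript
// Classify a record's storage tier by age, per the retention table:
// 0–90 days hot, 91 days to 7 years cold, older than 7 years deleted.
function storageTier(
  timestamp: string,
  now: Date = new Date(),
): 'hot' | 'cold' | 'deleted' {
  const ageDays =
    (now.getTime() - new Date(timestamp).getTime()) / 86_400_000;
  if (ageDays <= 90) return 'hot';
  if (ageDays <= 7 * 365) return 'cold';
  return 'deleted';
}
```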
## GDPR Anonymization

Audit records are immutable: originals are never modified or deleted. GDPR compliance is achieved by appending an ANONYMIZE record in ClickHouse's ReplacingMergeTree:

```typescript
// Admin-only operation (requires permission: audit.anonymize)
await kernel.audit().anonymizeUser('01j9pusr0000000000000001');
// Appends an ANONYMIZE record: email → [REDACTED], ip → 0.0.0.0, name → [REDACTED]
// Financial records (money.*) are NEVER anonymized (AML compliance)
```
All reads use `SELECT ... FINAL` (the ClickHouse ReplacingMergeTree modifier), which returns only the latest version of each record, i.e. the anonymized one:

- Without `FINAL`: ClickHouse may return BOTH the original AND the anonymized rows (merging is a background process).
- With `FINAL`: only the latest version (the anonymized row) is returned.
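The `FINAL` semantics amount to "keep only the newest version per record key". The dedup can be simulated in a few lines (an illustrative model of the behaviour, not of how ClickHouse implements it):

```typescript
// Simulate ReplacingMergeTree + FINAL: for each record id, keep only
// the row with the greatest version number.
interface VersionedRow {
  id: string;
  version: number;
  payload: Record<string, unknown>;
}

function selectFinal(rows: VersionedRow[]): VersionedRow[] {
  const latest = new Map<string, VersionedRow>();
  for (const row of rows) {
    const current = latest.get(row.id);
    if (!current || row.version > current.version) latest.set(row.id, row);
  }
  return [...latest.values()];
}
```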
Cold storage (S3 Glacier) anonymization: a monthly background job restores segments from Glacier, applies the `anonymization_log`, and writes the clean version back to Glacier.