
Caching (Valkey)

The Data Layer exposes Valkey (the open-source Redis fork) as a distributed cache through kernel.data().cache(). The same Valkey cluster is used internally by the platform for rate-limiting, session storage, deduplication, and RBAC permission caching — modules access their own keyspace through the SDK without any collision risk.


Key Namespacing

Module cache keys are automatically namespaced to prevent collisions between modules or tenants:

Cache key formula: {tenantId}:{moduleId}:{userKey}

Example:
tenantId = 01j9p3kz5f00000000000000
moduleId = crm
userKey = contacts:summary

Stored in Valkey as:
01j9p3kz5f00000000000000:crm:contacts:summary

The SDK handles namespacing internally. Module code only specifies the userKey portion:

await kernel.data().cache().set('contacts:summary', data, { ttl: 300 });
// Stored as: {tenantId}:{moduleId}:contacts:summary
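The namespacing formula can be sketched as a simple join. This is purely illustrative — module code never builds the full key itself; the SDK applies the prefix internally:

```typescript
// Illustration of the {tenantId}:{moduleId}:{userKey} formula.
// The SDK performs this join internally; modules only pass userKey.
function fullKey(tenantId: string, moduleId: string, userKey: string): string {
  return `${tenantId}:${moduleId}:${userKey}`;
}

const stored = fullKey('01j9p3kz5f00000000000000', 'crm', 'contacts:summary');
// → '01j9p3kz5f00000000000000:crm:contacts:summary'
```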

Operations

get

Reads a value by key. Returns null when the key does not exist or has expired.

const cached = await kernel.data().cache().get('contacts:summary');
if (cached !== null) {
return cached;
}
// Cache miss → fetch from DB → store in cache
const fresh = await kernel.data().list('contacts', { ... });
await kernel.data().cache().set('contacts:summary', fresh, { ttl: 300 });
return fresh;

set

Stores a value with an optional TTL (time-to-live, in seconds). When ttl is omitted the key persists until it is explicitly deleted or evicted under memory pressure (the cluster runs an allkeys-lru eviction policy). Keys without a TTL are not recommended in production — always set one.

// TTL = 300 seconds (5 minutes)
await kernel.data().cache().set('contacts:count', 48200, { ttl: 300 });

// TTL = 1 day
await kernel.data().cache().set('reports:q1-2026', reportData, { ttl: 86400 });

// No TTL — persists until deleted (⚠ use with caution)
await kernel.data().cache().set('feature-flags', flags);

Stored values are JSON-serialised automatically. Arrays, objects, numbers, and strings are all supported.
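One consequence of JSON serialisation worth noting: only plain data survives the round trip. A Date, for example, must be stored as an ISO string and comes back as a string. The sketch below uses an in-memory Map as a stand-in for the Valkey-backed cache to show the behaviour; the real client is kernel.data().cache():

```typescript
// In-memory stand-in for the cache, used only to illustrate JSON round-tripping.
const store = new Map<string, string>();

function cacheSet(key: string, value: unknown): void {
  store.set(key, JSON.stringify(value)); // values are serialised on write
}

function cacheGet<T>(key: string): T | null {
  const raw = store.get(key);
  return raw === undefined ? null : (JSON.parse(raw) as T);
}

cacheSet('reports:q1-2026', { revenue: 120_000, generatedAt: '2026-04-01T00:00:00Z' });
const report = cacheGet<{ revenue: number; generatedAt: string }>('reports:q1-2026');
// report.revenue is a number again; generatedAt stays a plain ISO string
```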

delete

Removes a single key immediately:

await kernel.data().cache().delete('contacts:summary');

Use delete when a specific record changes and you know exactly which key is stale.
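For example, when a single contact is updated, the exact stale key can be derived from the record id rather than flushing a whole key family. The `contacts:<id>` key shape and the `contactKey` helper here are an illustrative convention, not part of the SDK:

```typescript
// Derive a per-record cache key so writes can invalidate precisely.
// The 'contacts:<id>' shape is an illustrative convention, not an SDK rule.
function contactKey(id: string): string {
  return `contacts:${id}`;
}

// The delete function is injected so this sketch stays self-contained;
// in module code it would be (k) => kernel.data().cache().delete(k).
async function invalidateContact(
  id: string,
  del: (key: string) => Promise<void>,
): Promise<void> {
  await del(contactKey(id)); // only this one record's entry is stale
}
```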

invalidate (pattern)

Removes all keys matching a glob-style pattern. Useful after bulk operations where many cached entries may be stale:

// Invalidate all keys starting with 'contacts:'
await kernel.data().cache().invalidate('contacts:*');

// Invalidate a specific report family
await kernel.data().cache().invalidate('reports:q1-*');

Pattern invalidation uses Valkey SCAN + DEL internally. SCAN iterates the keyspace, so the cost grows with the total number of keys scanned, not just the number of matches. For hot paths, prefer a targeted delete over broad pattern invalidation.


REST Endpoint

Cache operations are available via the Data Layer REST API for server-side module consumers that do not use the TypeScript SDK:

GET https://api.septemcore.com/v1/data/cache/{key}
POST https://api.septemcore.com/v1/data/cache/{key}
DELETE https://api.septemcore.com/v1/data/cache/{key}
DELETE https://api.septemcore.com/v1/data/cache?pattern={pattern}

Read cache key

GET https://api.septemcore.com/v1/data/cache/contacts:summary
Authorization: Bearer <access_token>

Response 200 OK (cache hit):

{
"key": "contacts:summary",
"value": { "active": 38000, "inactive": 10200 },
"ttl": 242
}

Response 404 Not Found (cache miss):

{
"type": "https://api.septemcore.com/problems/not-found",
"status": 404,
"detail": "Cache key not found or expired."
}
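When calling the read endpoint from code, a 404 is a cache miss, not an error, and should map to null just as the SDK's get does. A minimal sketch: the fetch implementation is injected to keep the function testable, and the base URL and token are placeholders:

```typescript
// Read a cache key over the REST API, mapping 404 (miss) to null.
// fetchFn is injected for testability; pass the global fetch in real code.
async function restCacheGet(
  fetchFn: typeof fetch,
  baseUrl: string,
  token: string,
  key: string,
): Promise<unknown | null> {
  const res = await fetchFn(`${baseUrl}/v1/data/cache/${encodeURIComponent(key)}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (res.status === 404) return null; // cache miss, not an error
  if (!res.ok) throw new Error(`Cache read failed: ${res.status}`);
  const body = (await res.json()) as { key: string; value: unknown; ttl: number };
  return body.value;
}
```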

Write cache key

POST https://api.septemcore.com/v1/data/cache/contacts:summary
Authorization: Bearer <access_token>
Content-Type: application/json

{
"value": { "active": 38000, "inactive": 10200 },
"ttl": 300
}

Response 204 No Content.

Delete cache key

DELETE https://api.septemcore.com/v1/data/cache/contacts:summary
Authorization: Bearer <access_token>

Response 204 No Content.

Pattern invalidation

DELETE https://api.septemcore.com/v1/data/cache?pattern=contacts:*
Authorization: Bearer <access_token>

Response 200 OK:

{
"invalidated": 14
}

Common Patterns

Cache-aside (read-through)

The most common pattern: check cache first, fall back to the database, store the result:

const CACHE_KEY = 'contacts:active-count';
const CACHE_TTL = 300; // 5 minutes

async function getActiveContactCount(): Promise<number> {
const cached = await kernel.data().cache().get(CACHE_KEY);
if (cached !== null) return cached as number;

const result = await kernel.data().analytics({
model: 'contacts',
aggregate: 'count',
filters: { status: 'eq.active' },
});
const count = result.data[0].count;

await kernel.data().cache().set(CACHE_KEY, count, { ttl: CACHE_TTL });
return count;
}

Invalidation on write using lifecycle hooks

Use an after lifecycle hook to invalidate cache whenever a record changes, keeping the cache consistent without polling:

// In module hook registration (module server-side)
kernel.data().registerHook('contacts.update.after', async (event) => {
await kernel.data().cache().invalidate('contacts:*');
});
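Invalidating all of contacts:* on every update is simple but heavy-handed. If the hook event carries the changed record's id, the hook can compute a narrower set of stale keys and delete just those. The event shape below is an assumption for illustration, not the documented hook payload:

```typescript
// Narrow invalidation: compute only the keys derived from the changed record.
// HookEvent/recordId is an assumed shape, not the documented hook payload.
type HookEvent = { recordId: string };

function staleKeysFor(event: HookEvent): string[] {
  // Per-record entry plus the aggregate that includes this record.
  return [`contacts:${event.recordId}`, 'contacts:summary'];
}
```

The returned keys would then be passed one by one to delete instead of a pattern invalidation.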

Atomic counter (rate limiting, request counting)

// Increment a counter — Valkey INCR is atomic
const count = await kernel.data().cache().increment('api:calls:today', { ttl: 86400 });
if (count > 10_000) {
throw new Error('Module API call limit reached.');
}

Valkey Cluster Configuration

Parameter           Value                              Notes
Client library      valkey-io/valkey-go                Official Go client
Connection          Pool-based (per service)           Shared pool, not per-request connection
Persistence         AOF + RDB                          Data survives restarts
Eviction policy     allkeys-lru                        LRU eviction when memory is full
Max memory guard    Configurable (VALKEY_MAX_MEMORY)   Alert at 80% utilisation
HA/Cluster mode     Valkey Cluster (3+ shards)         No single point of failure

Modules share the same Valkey cluster but are isolated by key namespace ({tenantId}:{moduleId}:). A bug in one module's cache logic cannot read or overwrite another module's keys.


Error Reference

Scenario                          HTTP   type URI suffix
Key not found (GET)               404    not-found
Value exceeds size limit (1 MB)   400    record-size-exceeded
Invalid pattern syntax            400    validation-error
Valkey unavailable                503    service-unavailable
Insufficient permission           403    forbidden
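Of these, the 503 case deserves explicit handling in module code: a cache outage should degrade to the slower database path, not fail the request. A sketch of that convention (a module-side pattern, not an SDK guarantee):

```typescript
// Wrap a cache read so Valkey unavailability (503 / connection errors)
// degrades to a cache miss instead of failing the whole request.
async function cacheGetSafe<T>(read: () => Promise<T | null>): Promise<T | null> {
  try {
    return await read();
  } catch {
    // Swallow the transport error; the caller falls back to the database.
    return null;
  }
}

// Usage sketch: cacheGetSafe(() => kernel.data().cache().get('contacts:summary'))
```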