# Data Layer Limits
All Data Layer limits are enforced at two points: the API Gateway
(before the request reaches the Data Layer) and the Data Layer itself
(for limits that require schema or payload inspection). Exceeding any
limit returns 400 Bad Request (RFC 9457) before a database query is
executed.
## Record Limits
| Parameter | Limit | Enforcement point |
|---|---|---|
| Max record size | 1 MB | API Gateway checks Content-Length before forwarding |
| Max JSONB field size | 256 KB | Data Layer checks each JSONB column on INSERT/UPDATE |
| Max tables per module | 50 | Module Registry checks during install |
### Why 1 MB

Industry standard: Supabase (1 MB), Firebase (1 MB), DynamoDB
(400 KB). PostgreSQL WAL and the Debezium CDC connector degrade
significantly on records > 1 MB. To store large binary data, use
kernel.files() (File Primitive) and store the file URL in the record.
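The Gateway enforces this limit via Content-Length; a client can approximate the same check before sending, and route oversized payloads through the File Primitive instead. A minimal sketch (the helper names are illustrative, not part of the API):

```python
import json

# Default limit from the table above (DATA_MAX_RECORD_SIZE_BYTES).
MAX_RECORD_SIZE_BYTES = 1_048_576  # 1 MB

def record_size_bytes(record: dict) -> int:
    """Serialized size of a record, approximating the Gateway's
    Content-Length check."""
    return len(json.dumps(record).encode("utf-8"))

def fits_record_limit(record: dict, limit: int = MAX_RECORD_SIZE_BYTES) -> bool:
    """True if the record can be stored inline. False means the large
    payload should go through kernel.files() and only the resulting
    file URL should be stored in the record."""
    return record_size_bytes(record) <= limit

small = {"name": "Acme", "notes": "short"}
big = {"name": "Acme", "blob": "x" * 2_000_000}
print(fits_record_limit(small))  # True
print(fits_record_limit(big))    # False
```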
### Why 256 KB for JSONB

JSONB columns are intended for metadata and configuration, not for
files or large blobs. 256 KB is sufficient for any structured metadata
object. Larger data belongs in a dedicated column (TEXT / BYTEA) with
its own size budget, or in File Storage.
### Why 50 tables per module
Prevents schema sprawl and keeps information_schema queries fast.
50 tables covers the most complex enterprise module. If a module
genuinely needs more, it is an architectural signal that it should be
split into two modules.
## Query Limits
| Parameter | Default | Hard limit | Enforcement point |
|---|---|---|---|
| Records per page (limit) | 20 | 100 | Data Layer — before SQL execution |
| Filters per request | — | 10 | Query parser — before SQL execution |
| Relation depth (include / select) | — | 2 levels | Query parser — schema validation |
| Analytics query timeout | — | 30 seconds | ClickHouse query timeout setting |
### Why 100 records per page
Cursor-based pagination means there is no performance cost to fetching the next page. However, serialising and transmitting 100+ records in a single HTTP response increases Gateway memory pressure and client parse time with diminishing returns. Clients that need > 100 records at once should paginate in a loop.
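The pagination loop can be sketched as follows. The response shape (`data`, `next_cursor`) and the `fetch_page` callable stand in for the real HTTP client and are illustrative assumptions; the hard limit of 100 is from the table above:

```python
def fetch_all(fetch_page, page_size=100):
    """Collect every record by following cursors. page_size is capped
    at the documented hard limit of 100 per request."""
    records, cursor = [], None
    while True:
        page = fetch_page(limit=page_size, cursor=cursor)
        records.extend(page["data"])
        cursor = page.get("next_cursor")
        if cursor is None:
            return records

# A fake page source standing in for the HTTP client:
def fake_pages(limit, cursor):
    data = list(range(250))
    start = cursor or 0
    chunk = data[start:start + limit]
    nxt = start + limit if start + limit < len(data) else None
    return {"data": chunk, "next_cursor": nxt}

print(len(fetch_all(fake_pages)))  # 250, fetched in three pages
```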
### Why 10 filters
Complex WHERE clauses with many conditions can become expensive even
with indexes. 10 atomic filter conditions covers virtually every
real-world use case. Queries with more conditions should use
ClickHouse analytics instead of the CRUD query language.
### Why depth 2 for relations
A LEFT JOIN chain of depth 3+ produces a Cartesian product risk
across three tables, making it impossible to guarantee bounded query
time. Depth 2 (e.g. contacts → company → industry) is the
enterprise-validated sweet spot: rich enough for UI needs, safe for
the OLTP database.
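Both query limits can be pre-checked client-side before the request ever reaches the query parser. A minimal sketch, assuming a query shaped as a dict with a `filters` list and a nested `include` dict (the query shape is an assumption for illustration; the limits and error codes are the documented ones):

```python
MAX_FILTERS = 10
MAX_RELATION_DEPTH = 2

def include_depth(include) -> int:
    """Nesting depth of an include spec,
    e.g. {"company": {"industry": {}}} -> 2."""
    if not include:
        return 0
    return 1 + max(include_depth(child) for child in include.values())

def validate_query(query: dict) -> list[str]:
    """Return the documented error codes this query would trigger."""
    errors = []
    if len(query.get("filters", [])) > MAX_FILTERS:
        errors.append("FILTER_LIMIT_EXCEEDED")
    if include_depth(query.get("include", {})) > MAX_RELATION_DEPTH:
        errors.append("INCLUDE_DEPTH_EXCEEDED")
    return errors

# contacts -> company -> industry is depth 2: allowed.
ok = {"filters": [{"field": "name"}], "include": {"company": {"industry": {}}}}
bad = {"filters": [{}] * 12, "include": {"a": {"b": {"c": {}}}}}
print(validate_query(ok))   # []
print(validate_query(bad))  # ['FILTER_LIMIT_EXCEEDED', 'INCLUDE_DEPTH_EXCEEDED']
```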
## Environment Variable Overrides

All limits can be raised for enterprise deployments with specific
requirements. Values lower than the defaults are also accepted, for
example to enforce stricter SLAs on a specific tenant cluster:
| Environment variable | Default | Controls |
|---|---|---|
| DATA_MAX_RECORD_SIZE_BYTES | 1048576 (1 MB) | Max single record size |
| DATA_MAX_JSONB_SIZE_BYTES | 262144 (256 KB) | Max JSONB field size |
| DATA_MAX_TABLES_PER_MODULE | 50 | Tables per module |
| DATA_MAX_PAGE_LIMIT | 100 | Max records per page |
| DATA_MAX_FILTERS_PER_QUERY | 10 | Filters per list/analytics request |
| DATA_MAX_RELATION_DEPTH | 2 | Max include/select nesting depth |
| DATA_HOOK_BEFORE_TIMEOUT_MS | 2000 | Before-hook execution timeout |
| DATA_HOOK_AFTER_TIMEOUT_MS | 10000 | After-hook execution timeout per attempt |
| DATA_PERMISSION_RECONCILIATION_ON_MIGRATE | true | Run permission reconciliation after migration |
| DATA_ANALYTICS_QUERY_TIMEOUT_SEC | 30 | ClickHouse query timeout |
Changes to environment variables require a service restart to take effect.
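A service resolving these overrides at startup might look like the following sketch (the helper name is illustrative; the variable names and defaults are from the table above):

```python
import os

# Defaults from the table above; environment variables override them.
_LIMIT_DEFAULTS = {
    "DATA_MAX_RECORD_SIZE_BYTES": 1_048_576,
    "DATA_MAX_JSONB_SIZE_BYTES": 262_144,
    "DATA_MAX_TABLES_PER_MODULE": 50,
    "DATA_MAX_PAGE_LIMIT": 100,
    "DATA_MAX_FILTERS_PER_QUERY": 10,
    "DATA_MAX_RELATION_DEPTH": 2,
    "DATA_HOOK_BEFORE_TIMEOUT_MS": 2_000,
    "DATA_HOOK_AFTER_TIMEOUT_MS": 10_000,
    "DATA_ANALYTICS_QUERY_TIMEOUT_SEC": 30,
}

def resolve_limit(name: str) -> int:
    """Integer limit from the environment, falling back to the default.
    Read once at startup: changes require a restart to take effect."""
    raw = os.environ.get(name)
    return int(raw) if raw is not None else _LIMIT_DEFAULTS[name]

print(resolve_limit("DATA_MAX_PAGE_LIMIT"))  # 100 unless overridden
```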
## Limit Violation Responses
All limit violations return 400 Bad Request in RFC 9457 format:
### Record size exceeded

```json
{
  "type": "https://api.septemcore.com/problems/record-size-exceeded",
  "status": 400,
  "detail": "Record payload exceeds maximum size of 1 MB.",
  "extensions": {
    "actual_bytes": 1250000,
    "max_bytes": 1048576
  }
}
```
### JSONB field size exceeded

```json
{
  "type": "https://api.septemcore.com/problems/record-size-exceeded",
  "status": 400,
  "detail": "JSONB field 'metadata' exceeds maximum size of 256 KB.",
  "extensions": {
    "field": "metadata",
    "actual_bytes": 300000,
    "max_bytes": 262144
  }
}
```
### Page limit exceeded

```json
{
  "type": "https://api.septemcore.com/problems/validation-error",
  "status": 400,
  "detail": "Requested limit of 200 exceeds maximum of 100.",
  "code": "PAGE_LIMIT_EXCEEDED"
}
```
### Filter count exceeded

```json
{
  "type": "https://api.septemcore.com/problems/validation-error",
  "status": 400,
  "detail": "Query contains 12 filters; maximum is 10.",
  "code": "FILTER_LIMIT_EXCEEDED"
}
```
### Relation depth exceeded

```json
{
  "type": "https://api.septemcore.com/problems/validation-error",
  "status": 400,
  "detail": "include depth exceeds maximum of 2 levels.",
  "code": "INCLUDE_DEPTH_EXCEEDED"
}
```
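A client can branch on the problem document's `type` and `code` fields to surface these errors. A minimal sketch of parsing the RFC 9457 responses shown above (the summary format is illustrative):

```python
import json

def describe_limit_error(body: str) -> str:
    """Turn an RFC 9457 problem document from the Data Layer into a
    short human-readable summary."""
    problem = json.loads(body)
    # Last URL segment, e.g. "record-size-exceeded" or "validation-error".
    kind = problem["type"].rsplit("/", 1)[-1]
    # Prefer the explicit machine-readable code when present.
    code = problem.get("code", kind.upper().replace("-", "_"))
    return f"{code}: {problem['detail']}"

body = json.dumps({
    "type": "https://api.septemcore.com/problems/validation-error",
    "status": 400,
    "detail": "Requested limit of 200 exceeds maximum of 100.",
    "code": "PAGE_LIMIT_EXCEEDED",
})
print(describe_limit_error(body))
# PAGE_LIMIT_EXCEEDED: Requested limit of 200 exceeds maximum of 100.
```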
## Summary Table
| Limit | Default | Override env variable |
|---|---|---|
| Max record size | 1 MB | DATA_MAX_RECORD_SIZE_BYTES |
| Max JSONB field | 256 KB | DATA_MAX_JSONB_SIZE_BYTES |
| Max tables / module | 50 | DATA_MAX_TABLES_PER_MODULE |
| Max records / page | 100 | DATA_MAX_PAGE_LIMIT |
| Max filters / query | 10 | DATA_MAX_FILTERS_PER_QUERY |
| Max relation depth | 2 | DATA_MAX_RELATION_DEPTH |
| Before-hook timeout | 2 000 ms | DATA_HOOK_BEFORE_TIMEOUT_MS |
| After-hook timeout | 10 000 ms | DATA_HOOK_AFTER_TIMEOUT_MS |
| Analytics timeout | 30 s | DATA_ANALYTICS_QUERY_TIMEOUT_SEC |