# Docker Compose Setup
Docker Compose is the canonical way to run Platform-Kernel locally and on a single-node staging server. One command brings up the complete production-identical stack — no mocks, no stubs.
All image versions are pinned in `docker/versions.env` — the single source of truth for the entire deployment pipeline.
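As a sketch of how this works: `--env-file` loads `versions.env` into Compose's interpolation environment, and the compose files reference those variables in their `image:` fields. The variable names and tags below are illustrative, not the real file contents:

```shell
# Hypothetical excerpt in the style of versions.env (real names/tags differ)
cat > /tmp/versions.env.example <<'EOF'
POSTGRES_IMAGE=postgres:16.4
VALKEY_IMAGE=valkey/valkey:8.0
EOF

# docker compose --env-file makes each KEY=VALUE pair available for
# ${...} interpolation in docker-compose.yml, e.g. image: ${POSTGRES_IMAGE}.
# Sourcing the file in a shell shows the same substitution:
set -a                      # auto-export everything sourced next
. /tmp/versions.env.example
set +a
echo "$POSTGRES_IMAGE"      # → postgres:16.4
```

Pinning every tag in one file keeps local, CI, and staging environments on identical images.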
## Stack Overview

The full stack comprises 24 containers organised into two layers: stateful infrastructure (PostgreSQL, Valkey, Kafka, ClickHouse, RabbitMQ, Vault, SeaweedFS, ClamAV) and an application layer of 12 Go services plus their supporting containers.
## Compose File Structure

```text
docker/
├── versions.env                  ← SSOT: all image tags pinned here
├── docker-compose.yml            ← Full local stack (24 containers)
├── docker-compose.ci.yml         ← CI server: SonarQube + PG 16
├── docker-compose.gateway.yaml   ← Gateway + Envoy only
├── docker-compose.kafka.yaml     ← Kafka standalone for event testing
├── docker-compose.pact.yml       ← Pact Broker overlay
├── docker-compose.rabbitmq.yaml  ← RabbitMQ standalone
├── docker-compose.sandbox.yml    ← Dev sandbox (seeded data + Envoy)
├── postgres/
│   └── 01_init_databases.sh      ← Creates per-service databases
├── clickhouse/                   ← ClickHouse config + users.xml
├── vault/                        ← Vault config (dev mode)
├── envoy/                        ← Envoy listener + route config
├── seaweedfs/                    ← SeaweedFS filer config
└── feature-flags/                ← GoFeatureFlag flags.yaml
```
## Quick Start

### Full Stack (all 24 containers)

```bash
# From the monorepo root (.dev/kernel/)
docker compose \
  --env-file docker/versions.env \
  -f docker/docker-compose.yml \
  up -d --wait
```
`--wait` blocks until every container with a healthcheck reports healthy. On a cold start (first pull) expect 3–5 minutes; subsequent starts take under 30 seconds.
### Infrastructure Only (for local Go service development)

```bash
# Brings up stateful services only — run Go services with `go run`
docker compose \
  --env-file docker/versions.env \
  -f docker/docker-compose.yml \
  up -d --wait \
  postgres valkey kafka clickhouse rabbitmq vault seaweedfs clamav
```
### Stop and Reset

```bash
# Stop — preserve data volumes
docker compose -f docker/docker-compose.yml down

# Full reset — destroy all volumes
docker compose \
  --env-file docker/versions.env \
  -f docker/docker-compose.yml \
  down -v --remove-orphans
```
## Environment Variables

All secrets use safe defaults for local development. Do not use these values in production.

### Passwords (required in production, defaulted in dev)
| Variable | Default (dev) | Service |
|---|---|---|
| `POSTGRES_PASSWORD` | `kernel_dev_password` | All services connecting to PG |
| `VALKEY_PASSWORD` | `valkey_dev_password` | Valkey authentication |
| `VAULT_TOKEN` | `kernel-dev-root-token` | Vault dev mode root token |
| `JWT_PRIVATE_KEY` | Self-signed ES256 PEM (base64) | IAM JWT signing |
| `JWT_PUBLIC_KEY` | Corresponding public key (base64) | IAM JWT verification |
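Any of these can be overridden per-shell before running `up`. Assuming the compose file uses the usual `${VAR:-default}` interpolation form (an assumption about its exact syntax), an exported value wins over the dev default:

```shell
# With nothing set, the dev default applies
unset POSTGRES_PASSWORD
echo "${POSTGRES_PASSWORD:-kernel_dev_password}"   # → kernel_dev_password

# An exported value overrides the default, e.g. before `docker compose up`:
export POSTGRES_PASSWORD='use-a-real-secret'
echo "${POSTGRES_PASSWORD:-kernel_dev_password}"   # → use-a-real-secret
```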
### Go Service Environment Variables

All Go services read configuration from environment variables. The full reference is in Configuration. Key variables injected by `docker-compose.yml`:
| Variable | Example value | Description |
|---|---|---|
| `DATABASE_URL` | (see below) | PostgreSQL DSN |
| `KAFKA_BROKERS` | `kafka:9092` | Kafka broker address |
| `VALKEY_ADDR` | `valkey:6379` | Valkey address |
| `VAULT_ADDR` | `http://vault:8200` | Vault API address |
| `IAM_GRPC_ADDR` | `iam:50050` | IAM gRPC address (Gateway) |
| `DATA_LAYER_GRPC_ADDR` | `data-layer:50052` | Data Layer gRPC (Gateway) |
| `BILLING_GRPC_ADDR` | `billing:50055` | Billing gRPC (Gateway) |
| `NOTIFY_GRPC_ADDR` | `notify:50054` | Notify gRPC (Gateway) |
| `MODULE_REGISTRY_GRPC_ADDR` | `module-registry:50060` | Module Registry gRPC |
`DATABASE_URL` example:

```text
postgres://kernel:***@postgres:5432/platform_kernel?sslmode=disable
```
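The DSN is assembled from values that appear elsewhere on this page (the `kernel` user, the `platform_kernel` database, the dev password default). A sketch of building it in a shell script, assuming those dev values:

```shell
PGUSER=kernel
PGPASSWORD="${POSTGRES_PASSWORD:-kernel_dev_password}"  # dev default from the table above
PGHOST=postgres                                          # service name on the compose network
PGDATABASE=platform_kernel
DATABASE_URL="postgres://${PGUSER}:${PGPASSWORD}@${PGHOST}:5432/${PGDATABASE}?sslmode=disable"
echo "$DATABASE_URL"
```

`sslmode=disable` is acceptable only on the local compose network; production deployments should require TLS.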
## Service-Specific Ports
| Service | HTTP port | gRPC port |
|---|---|---|
| Gateway | 8051 | 50051 |
| IAM | 8050 | 50050 |
| Data Layer | 8052 | 50052 |
| Billing | 8055 | 50055 |
| Integration Hub | 8056 | 50056 |
| Notify | — | 50054 |
| Files | — | 50057 |
| Money | — | 50058 |
| Audit | — | 50059 |
| Module Registry | — | 50060 |
| Domain Resolver | 8061 | 50061 |
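Since all of these are published on localhost, it is worth confirming the table assigns no port twice. A quick self-contained check over the numbers above (the port list is transcribed by hand from the table):

```shell
# All host ports from the table (HTTP + gRPC, "—" entries skipped)
ports="8051 50051 8050 50050 8052 50052 8055 50055 8056 50056 50054 50057 50058 50059 50060 8061 50061"

# Any value printed by `uniq -d` would be a duplicate assignment
dups=$(printf '%s\n' $ports | sort -n | uniq -d)
[ -z "$dups" ] && echo "no duplicate ports"   # → no duplicate ports
```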
## Database Initialisation

PostgreSQL runs a one-time init script on cold start (`postgres/01_init_databases.sh`). This script creates separate databases for each service:
```text
# Databases created automatically on first boot:
platform_kernel  ← IAM, Data Layer, Billing, Money, Module Registry
platform_domain  ← Domain Resolver

# Each service applies its own migrations via goose at startup.
```
ClickHouse tables are created by Debezium and the Audit service on first connection.
## Startup Dependency Order

Docker Compose enforces startup ordering via `depends_on` with `condition: service_healthy`. The dependency chain:
All 12 Go services wait for:

- `postgres` healthy (`pg_isready -U kernel -d platform_kernel`)
- `vault` healthy (`GET /v1/sys/health`)
- their respective `migrate-*` container to complete (`goose up`)

Kafka consumers (Event Bus, Audit, Money, Billing) additionally wait for the Kafka KRaft broker to report healthy.
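In compose syntax, that chain for a single service looks roughly like the fragment below (a sketch — the service and container names are illustrative, not copied from the real `docker-compose.yml`):

```yaml
services:
  iam:
    depends_on:
      postgres:
        condition: service_healthy                  # healthcheck passing
      vault:
        condition: service_healthy
      migrate-iam:
        condition: service_completed_successfully   # goose up exited 0
```

`service_completed_successfully` is how Compose expresses "wait for a one-shot container to finish", which matches the `migrate-*` pattern above.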
## Health Check Verification

After `docker compose up --wait`, verify the full stack:
```bash
# Gateway liveness (process alive)
curl -sf http://localhost:8051/health/live
# → {"status":"alive"}

# Gateway full health (all dependency checks)
curl -s http://localhost:8051/health | python3 -m json.tool
# → {"status":"healthy","service":"gateway","checks":{...}}

# IAM service
curl -sf http://localhost:8050/health/ready
# → {"status":"ready"}

# PostgreSQL
docker exec platform-postgres pg_isready -U kernel -d platform_kernel
# → /var/run/postgresql:5432 - accepting connections

# Kafka broker (KRaft)
docker exec platform-kafka \
  /opt/kafka/bin/kafka-broker-api-versions.sh \
  --bootstrap-server localhost:9092 2>&1 | grep "broker version"

# ClickHouse
curl -sf "http://localhost:8123/?query=SELECT%201"
# → 1

# Valkey
docker exec platform-valkey \
  valkey-cli -a valkey_dev_password ping
# → PONG

# Vault (dev mode — always unsealed)
curl -sf http://localhost:8200/v1/sys/health | python3 -m json.tool
# → {"initialized":true,"sealed":false,...}
```
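Right after `up --wait` returns, an endpoint can still flap for a second or two while a service finishes warming up. In scripts, a small retry helper (a hypothetical function, not part of the repo) makes the checks robust:

```shell
# retry CMD...: run CMD up to 5 times, 2 seconds apart, until it succeeds
retry() {
  for _ in 1 2 3 4 5; do
    "$@" && return 0
    sleep 2
  done
  return 1
}

# Against the running stack you would use e.g.:
#   retry curl -sf http://localhost:8050/health/ready
# Self-contained demo with a command that succeeds immediately:
retry true && echo "ok"   # → ok
```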
## Volumes
Named volumes persist data across container restarts:
| Volume | Contents |
|---|---|
| `postgres_data` | PostgreSQL WAL + data files |
| `clickhouse_data` | ClickHouse tables + parts |
| `kafka_data` | Kafka KRaft log + metadata |
| `rabbitmq_data` | RabbitMQ queues + messages |
| `valkey_data` | Valkey RDB snapshots |
| `seaweedfs_data` | File chunks + filer metadata |
| `clamav_data` | ClamAV virus signature databases |
| `sonarqube_data` | SonarQube analysis data |
## Compose Profiles Overview
| File | Purpose | Use case |
|---|---|---|
| `docker-compose.yml` | Full local stack (24 containers) | Development |
| `docker-compose.ci.yml` | SonarQube + PG 16 | CI server |
| `docker-compose.sandbox.yml` | Seeded data + Envoy ingress overlay | Module dev sandbox |
| `docker-compose.kafka.yaml` | Kafka only | Event integration tests |
| `docker-compose.rabbitmq.yaml` | RabbitMQ only | Queue testing |
| `docker-compose.gateway.yaml` | Gateway + Envoy | API gateway testing |
| `docker-compose.pact.yml` | Pact Broker + PG | Contract testing |
### Sandbox Overlay (module development)
The sandbox overlay adds Envoy as a single ingress and injects seed data (test tenant, users, wallets):
```bash
docker compose \
  --env-file docker/versions.env \
  -f docker/docker-compose.yml \
  -f docker/docker-compose.sandbox.yml \
  up -d --wait
```

All traffic routes through `http://localhost:8080/api/v1/`.
## Resource Limits

Every container declares explicit memory limits in `deploy.resources`:
| Container | RAM limit |
|---|---|
| `platform-clickhouse` | 2G |
| `platform-kafka` | 512M |
| `platform-postgres` | 512M |
| `platform-files` | 512M (libvips headroom) |
| `platform-sonarqube` | 2G (CI only) |
| `platform-rabbitmq` | 256M |
| `platform-valkey` | 256M |
| Go services (12) | 128M each |
Total steady-state memory footprint: approximately 9 GB RAM.
## See Also
- Requirements — hardware and software prerequisites
- Kubernetes Deployment — production multi-node deployment
- Configuration Reference — all environment variables and defaults
- Vault Setup — JWT signing key management