Dev Environment Setup
This guide takes you from a fresh machine to a fully running Platform-Kernel infrastructure stack so you can write, test, and iterate on modules with production-identical services.
Stack parity. The local Docker Compose stack uses the same image versions as staging and production (versions are pinned in docker/versions.env). There are no "dev-mode only" shortcuts in infrastructure — what you test locally is what runs in prod.
Prerequisites
Install the following tools before proceeding. Exact versions matter.
| Tool | Required Version | Install |
|---|---|---|
| Go | 1.26.1 | go.dev/dl |
| Node.js | 24 LTS (Krypton) | nodejs.org or fnm install 24 |
| pnpm | 10.33.0 | npm install -g [email protected] |
| Docker Desktop | 4.69.0 (Engine 29.4.0) | docker.com |
| Git | 2.40+ | System package manager |
| buf | v1.68.2 | brew install bufbuild/buf/buf (optional — only for proto changes) |
Verify your environment:
go version # go version go1.26.1 linux/amd64
node --version # v24.x.x
pnpm --version # 10.33.0
docker version # Engine: 29.4.0
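The expected outputs above can also be checked mechanically. A minimal sketch — the `matches` helper is hypothetical, and the pinned strings come from the prerequisites table; it is demonstrated here against captured sample output rather than live tools:

```shell
# Hypothetical helper: succeed only if a tool's reported version
# string contains the pinned version from the prerequisites table.
matches() {
  case "$2" in
    *"$1"*) return 0 ;;
    *)      return 1 ;;
  esac
}

# Demonstrated on captured sample output:
matches "go1.26.1" "go version go1.26.1 linux/amd64" && echo "go OK"
matches "10.33.0"  "10.33.0"                         && echo "pnpm OK"
```

In real use, feed it live output, e.g. `matches "go1.26.1" "$(go version)" || echo "wrong Go version"`.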
Apple Silicon (M-series) note. Two services in the stack require platform: linux/amd64 (ClickHouse and ClamAV). Docker Desktop on macOS uses Rosetta 2 emulation for these automatically. Ensure "Use Rosetta for x86/amd64 emulation on Apple Silicon" is enabled in Docker Desktop → Settings → General.
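For reference, the platform pin in the compose file looks roughly like this (the service keys `clickhouse` and `clamav` are illustrative — check docker/docker-compose.yml for the real ones):

```yaml
# Illustrative excerpt — how a per-service amd64 pin is expressed in Compose.
services:
  clickhouse:
    platform: linux/amd64
  clamav:
    platform: linux/amd64
```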
Repository Layout
The entire kernel monorepo lives under .dev/kernel/ — a Go workspace combined
with a pnpm workspace:
.dev/kernel/
├── services/ # 15 Go microservices
│ ├── gateway/ # API Gateway (port 8051)
│ ├── iam/ # Identity & Access Management (port 8050)
│ ├── data-layer/ # Data CRUD + Analytics (port 8052)
│ ├── event-bus/ # Kafka + RabbitMQ bus (port 8053)
│ ├── notify/ # Notification Service (port 8054)
│ ├── billing/ # Billing & Subscriptions (port 8055)
│ ├── files/ # File Storage / S3 (port 8057)
│ ├── money/ # Money Service (port 8058)
│ ├── audit/ # Audit Log / ClickHouse (port 8059)
│ ├── integration-hub/ # External integrations (port 8056)
│ ├── module-registry/ # Module Registry (port 50060)
│ ├── domain-resolver/ # Custom Domain Mapping
│ ├── kernel-cli/ # Admin CLI
│ ├── vault/ # HashiCorp Vault client library
│ └── shared/ # Crypto, mTLS, Feature Flags helpers
│
├── packages/ # 22 TypeScript packages (SDK + tooling)
│ ├── sdk-auth/ # @platform/sdk-auth
│ ├── sdk-data/ # @platform/sdk-data
│ ├── sdk-events/ # @platform/sdk-events
│ ├── sdk-notify/ # @platform/sdk-notify
│ ├── sdk-files/ # @platform/sdk-files
│ ├── sdk-money/ # @platform/sdk-money
│ ├── sdk-audit/ # @platform/sdk-audit
│ ├── sdk-flags/ # @platform/sdk-flags
│ ├── sdk-core/ # @platform/sdk-core (shared types + HTTP client)
│ ├── sdk-ui/ # @platform/sdk-ui (Design System)
│ ├── sdk-testing/ # @platform/sdk-testing (mocks + fixtures)
│ ├── sdk-codegen/ # @platform/sdk-codegen (OpenAPI → TS)
│ ├── create-module/ # npx @platform/create-module CLI
│ ├── ui-shell/ # Admin UI Shell (Vite 8 + React 19 + MF 2.0)
│ ├── build-config/ # Shared Vite / ESLint / tsconfig
│ ├── dev-server/ # Local hot-reload development server
│ └── ...
│
├── proto/platform/ # gRPC Protobuf contracts (7 packages)
├── docker/ # Docker Compose configs + versions.env
├── sandbox/ # Sandbox environment
├── migrations/ # Shared DB migrations
├── go.work # Go workspace (links all services)
├── pnpm-workspace.yaml
└── .env.example # Environment variable template
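For orientation, the go.work file that stitches the service modules into one workspace looks roughly like this (an illustrative excerpt, not the full file — it lists all 15 services):

```go
// go.work (illustrative excerpt)
go 1.26.1

use (
	./services/gateway
	./services/iam
	./services/data-layer
	./services/shared
)
```

Any `go build` or `go test` run from .dev/kernel/ resolves cross-service imports through this workspace, so no per-service replace directives are needed.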
Step 1 — Clone and Configure Environment
# Clone (replace <your-remote-url> with your actual remote)
git clone <your-remote-url> platform-kernel
cd platform-kernel/.dev/kernel
# Copy environment template
cp .env.example .env
The .env file contains all required variables. Defaults work out of the
box for local development. For reference, here are the most important groups:
# PostgreSQL
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DB=platform_kernel
POSTGRES_USER=kernel
POSTGRES_PASSWORD=CHANGEME_postgres_password
# Valkey (Redis-compatible cache)
VALKEY_HOST=localhost
VALKEY_PORT=6379
VALKEY_PASSWORD=CHANGEME_valkey_password
# Kafka (KRaft mode — no ZooKeeper)
KAFKA_BROKER=localhost:9092
KAFKA_BROKERS=localhost:9092
KAFKA_CLIENT_ID=platform-kernel
# RabbitMQ
RABBITMQ_HOST=localhost
RABBITMQ_PORT=5672
RABBITMQ_USER=kernel
RABBITMQ_PASSWORD=CHANGEME_rabbitmq_password
RABBITMQ_VHOST=/platform
# ClickHouse (OLAP — audit + analytics)
CLICKHOUSE_HOST=localhost
CLICKHOUSE_HTTP_PORT=8123
CLICKHOUSE_NATIVE_PORT=9000
CLICKHOUSE_DB=platform_audit
CLICKHOUSE_USER=kernel
CLICKHOUSE_PASSWORD=CHANGEME_clickhouse_password
# JWT (ES256 — filled by HashiCorp Vault in staging/prod)
JWT_SECRET=CHANGEME_jwt_secret_min_32_chars
JWT_ACCESS_TTL=900 # 15 minutes
JWT_REFRESH_TTL=604800 # 7 days
# Gateway
GATEWAY_PORT=8051
GATEWAY_RATE_LIMIT_PER_TENANT=1000
GATEWAY_RATE_LIMIT_PER_IP=20
GATEWAY_RATE_LIMIT_AUTH_PER_MIN=5
# Database connection pool (all Go services)
DB_MAX_OPEN_CONNS=25
DB_MAX_IDLE_CONNS=5
Never commit .env to Git. It is listed in .gitignore. Use
a secrets manager (HashiCorp Vault, 1Password Secrets Automation)
for team secret sharing.
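Replace the CHANGEME_ placeholders before first boot. One way to generate strong local values — this assumes openssl is on your PATH; any CSPRNG source works equally well:

```shell
# Generate a random value that satisfies the >= 32-character JWT_SECRET rule.
# 48 random bytes base64-encode to 64 characters.
secret="$(openssl rand -base64 48)"
echo "generated ${#secret} characters"
```

Paste the generated value into .env by hand, or script it with sed — but never echo real secrets into shell history on shared machines.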
Step 2 — Start the Infrastructure Stack
The full stack is defined in docker/docker-compose.yml with image versions
pinned in docker/versions.env:
| Image | Version (April 2026) |
|---|---|
| PostgreSQL | 17-alpine |
| Valkey | 8.1-alpine |
| Apache Kafka (KRaft) | 3.9.0 |
| RabbitMQ | 4.1-management-alpine |
| ClickHouse | 25.3-alpine |
| SeaweedFS (S3) | 3.84 |
| HashiCorp Vault | 1.19 |
| GoFeatureFlag | v1.42.0 |
| ClamAV | 1.4 |
| Envoy | v1.33-latest |
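The pin file itself is a plain env file consumed by the compose config. An illustrative excerpt — the variable names here are assumptions, so check docker/versions.env for the real ones:

```shell
# docker/versions.env (illustrative excerpt)
POSTGRES_VERSION=17-alpine
VALKEY_VERSION=8.1-alpine
KAFKA_VERSION=3.9.0
CLICKHOUSE_VERSION=25.3-alpine
```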
Launch everything with a single command from .dev/kernel/:
docker compose -f docker/docker-compose.yml --env-file docker/versions.env up -d
Wait for all services to become healthy (takes ~2–3 minutes on first boot due to ClamAV signature download and ClickHouse WAL initialization):
docker compose -f docker/docker-compose.yml ps
Expected output — all STATUS should be healthy:
NAME STATUS
platform-postgres healthy
platform-valkey healthy
platform-kafka healthy
platform-rabbitmq healthy
platform-clickhouse healthy
platform-seaweedfs healthy
platform-vault healthy
platform-feature-flags running # healthcheck disabled — distroless image
platform-clamav healthy # takes ~120s on first boot
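Rather than re-running the ps command by hand, you can poll until nothing is left unhealthy. A sketch — the parser is demonstrated on captured sample output so it runs without a live Docker daemon; the surrounding wait loop is left to the reader:

```shell
# all_ready: succeed when every container line reports "healthy" or "running".
# Reads `docker compose ps`-style output on stdin; the header line is skipped.
all_ready() {
  awk 'NR > 1 && $2 != "healthy" && $2 != "running" { bad = 1 } END { exit bad }'
}

# Demonstrated against captured sample output:
printf 'NAME STATUS\nplatform-postgres healthy\nplatform-clamav starting\n' \
  | all_ready && echo "all ready" || echo "still waiting"
```

In real use: `until docker compose ps | all_ready; do sleep 5; done`.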
Step 3 — Service Port Map
Once running, the following local ports are exposed. All API calls go through
the Gateway (localhost:8051) — direct service ports are for observability
and debugging only.
| Service | HTTP Port | gRPC Port | Notes |
|---|---|---|---|
| Gateway | 8051 | 50051 | Primary API entry point |
| IAM | 8050 | 50050 | Identity & Access Management |
| Data Layer | 8052 | 50052 | CRUD + analytics |
| Event Bus | 8053 | 50053 | Kafka + RabbitMQ bridge |
| Notify | 8054 | 50054 | Notifications |
| Billing | 8055 | 50055 | Subscriptions + limits |
| Integration Hub | 8056 | 50056 | External integrations |
| Files | 8057 | 50057 | S3 object storage |
| Money | 8058 | 50058 | Wallets + transactions |
| Audit | 8059 | 50059 | Immutable audit log |
| SeaweedFS S3 | 8333 | — | S3-compatible object store |
| HashiCorp Vault | 8200 | — | Secret management |
| RabbitMQ Admin | 15672 | — | Management UI |
| ClickHouse HTTP | 8123 | — | HTTP query interface |
| GoFeatureFlag | 1031 | — | Feature flag evaluation API |
| Envoy Admin | 9901 | — | Proxy admin interface |
Verify the Gateway is healthy:
curl -s http://localhost:8051/health | jq .status
# "ok"
Step 4 — Install TypeScript Dependencies
From .dev/kernel/:
pnpm install
pnpm uses the workspace defined in pnpm-workspace.yaml — a single install
bootstraps all 22 packages simultaneously.
Step 5 — Build SDK Packages
The SDK packages must be built before your module can import them:
pnpm --filter "./packages/**" run build
For hot-reload development of sdk-* packages while writing a module:
pnpm --filter "@platform/sdk-data" run dev
Step 6 — Shared Tooling
packages/build-config
The @platform/build-config package is the Single Source of Truth for all build
tooling shared across modules:
- Shared Vite configuration (Module Federation 2.0 presets)
- Enterprise singletons — the shared block for federation.config.js. Do not hardcode requiredVersion in your module; update packages/build-config/src/mf-singletons.ts instead.
- Shared ESLint config (eslint.config.mjs)
- Shared tsconfig.base.json
packages/dev-server
The @platform/dev-server package provides a local hot-reload proxy that sits
in front of your module and the UI Shell:
# From your module directory
pnpm dev
The dev server proxies all /api/v1/* requests to localhost:8051 (Gateway)
and serves your module's Module Federation remote entry at
http://localhost:3001/remoteEntry.js.
Linting & Code Quality
The monorepo enforces strict quality gates:
# Go services — golangci-lint (config in .golangci.yml)
golangci-lint run ./...
# TypeScript — ESLint (config in eslint.config.mjs)
pnpm lint
# TypeScript type-checking
pnpm typecheck
# Go tests with coverage
go test ./... -coverprofile=coverage.out
go tool cover -func=coverage.out
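The 80% coverage floor can be enforced locally before pushing. A sketch that parses the `total:` line of the coverage summary — shown against a captured sample line so it runs standalone:

```shell
# Extract the total coverage percentage and compare it to the 80% floor.
# In real use, feed it: go tool cover -func=coverage.out | tail -n 1
total_line='total:          (statements)    83.4%'
pct=$(printf '%s\n' "$total_line" | awk '{ gsub(/%/, "", $NF); print $NF }')
awk -v p="$pct" 'BEGIN { exit !(p >= 80) }' \
  && echo "coverage OK (${pct}%)" \
  || echo "coverage below floor (${pct}%)"
```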
The CI pipeline runs all of these checks in sequence. Green local linting = green CI (there are no CI-only rules). The target coverage floor for every Go package is 80%.
Environment Validation Checklist
Before starting module development, verify all services are reachable:
# Gateway is up
curl -sf http://localhost:8051/health
# Kafka is accepting connections
docker exec platform-kafka /opt/kafka/bin/kafka-broker-api-versions.sh \
--bootstrap-server localhost:9092
# PostgreSQL is accepting queries
docker exec platform-postgres psql -U kernel -d platform_kernel -c "SELECT version();"
# Vault is initialized (dev mode — in-memory, no unsealing required)
curl -sf http://localhost:8200/v1/sys/health | jq .initialized
# true
# Feature flags evaluation
curl -sf http://localhost:1031/health | jq .initialized
# true
All five checks passing means your environment is ready for Sandbox and module development.