# Installation

## System requirements
- Linux host (x86_64) with a KVM-enabled CPU (Intel VT-x or AMD-V).
- 4 GB RAM minimum, 8 GB+ recommended.
- 20 GB+ free disk space for the OS; additional disks for VM storage.
- Network access between all nodes and at least one controller.
## Overview

Bringing up a kcore cluster takes one command to create the cluster PKI, then one command per node to install — all run from your operator workstation. No SSH, no manual config editing, and no interactive prompts.
- Create cluster PKI — one-time operation that generates all trust material.
- Install each node — `kctl node install` partitions the disk, deploys NixOS with LUKS encryption, configures mTLS certificates, and reboots the node into a ready state.
## Boot the kcore ISO

Download the kcore NixOS ISO and write it to a USB drive. Boot the target machine from it. The live environment starts a node-agent on port 9091 that accepts install commands from `kctl`. The default root password is `kcore`.
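Any standard imaging tool can write the ISO. A minimal `dd` sketch follows; the ISO filename and `/dev/sdX` are placeholders, and a scratch file stands in for both so the sketch is safe to run as-is:

```shell
# Safe-to-run sketch: a scratch file stands in for the USB device.
# For a real install, set DEV=/dev/sdX after confirming the device
# name with `lsblk`, and point ISO at the downloaded image.
set -e
cd "$(mktemp -d)"
head -c 4M /dev/urandom > kcore.iso   # stand-in for the downloaded ISO
DEV=usb-stand-in.img                  # replace with /dev/sdX for real use
dd if=kcore.iso of="$DEV" bs=4M conv=fsync status=none
cmp kcore.iso "$DEV" && echo "image written and verified"
```

Re-reading the written device with `cmp` (or a checksum) catches a bad flash before you boot from it.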
## Step 1 — Create the cluster PKI

On your operator workstation (where `kctl` is installed), initialise the cluster trust material. This generates a root CA, sub-CA, controller certificate, and `kctl` client certificate. The `--controller` flag sets the address of the first controller node you will install.
```shell
kctl create cluster \
  --controller 192.168.40.107:9090 \
  --context my-cluster
```
This is a one-time operation per cluster. Certificate material is saved under `~/.kcore/certs/` and the mTLS context is written to `~/.kcore/config` with all certificate data embedded inline. Every subsequent `kctl` command and every `kctl node install` call uses this context automatically.

If you already have a config file from a previous cluster, add `--force` to overwrite the existing certificate files.
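The trust chain itself is conventional X.509. Purely as an illustration (file names and subjects here are hypothetical, not kctl's actual layout), the root CA → sub-CA → client relationship can be reproduced with `openssl`:

```shell
# Hypothetical sketch of the two-tier chain `kctl create cluster` builds:
# a self-signed root CA signs a sub-CA, which signs the client certificate.
set -e
cd "$(mktemp -d)"
echo "basicConstraints=CA:TRUE" > ca.ext

# Root CA (self-signed)
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout root.key -out root.crt -subj "/CN=kcore-root-ca" -days 3650

# Sub-CA, signed by the root
openssl req -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout sub.key -out sub.csr -subj "/CN=kcore-sub-ca"
openssl x509 -req -in sub.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -extfile ca.ext -out sub.crt -days 1825

# kctl client certificate, signed by the sub-CA
openssl req -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout client.key -out client.csr -subj "/CN=kctl-client"
openssl x509 -req -in client.csr -CA sub.crt -CAkey sub.key \
  -CAcreateserial -out client.crt -days 365

# Verify the chain exactly as an mTLS peer would
openssl verify -CAfile root.crt -untrusted sub.crt client.crt
```

The final `openssl verify` mirrors what an mTLS peer does: trust only the root, chain through the sub-CA, and check the leaf.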
## Step 2 — Install the first node

With the target machine booted from the ISO, install kcore to disk. Use `--run-controller` to start a controller on this node. The `-k` flag is required because the live ISO does not have mTLS configured yet.
```shell
kctl node install \
  --node 192.168.40.107:9091 \
  --os-disk /dev/sda \
  --run-controller \
  --dc-id DC1 \
  -k
```
The installer partitions the disk with LUKS encryption (TPM2-sealed when available), deploys NixOS, configures the controller and node-agent services, provisions mTLS certificates, and reboots. After reboot, the controller listens on `:9090`, the node-agent on `:9091`, and the dashboard on `:8080`.
The node registers with its own controller and is auto-approved by default.
## Step 3 — Verify the cluster

After the node reboots, `kctl` connects over mTLS using the context created in step 1. No additional flags are needed:
```shell
kctl get nodes
```
You should see the node with status `ready` and approval `approved`. The dashboard is available at `http://<NODE_IP>:8080`.
## TLS modes

| Mode | Flag | Use case |
|---|---|---|
| mTLS (default) | none | Production — mutual certificate verification. Used for all post-install communication. |
| Insecure | `-k` / `--insecure` | Initial install only — the live ISO node-agent runs without TLS, so `-k` is required during `kctl node install`. After reboot all traffic switches to mTLS automatically. |
The `-k` flag is only needed during `kctl node install` when communicating with the live ISO. After the node is installed and rebooted, all communication uses mTLS. Never use `-k` with an installed cluster.
## What the installer does

Each `kctl node install` performs the following steps non-interactively:
- Disk wipe — wipes all signatures and partition tables from the target disk.
- Partitioning — creates an ESP boot partition and a LUKS-encrypted root partition (TPM2-sealed when a TPM is available, key-file fallback otherwise).
- NixOS install — generates a NixOS configuration, installs the OS, and deploys kcore binaries.
- Certificate provisioning — copies the cluster CA and sub-CA, generates a host-specific controller certificate (if `--run-controller`), and requests a node certificate from the controller (for agent-only installs).
- Configuration — writes `controller.yaml`, `node-agent.yaml`, and `dashboard.env` with the correct addresses, replication peers, and TLS paths.
- Reboot — unmounts all filesystems and reboots into the installed system.
## Next steps
- First cluster — full walkthrough with VM creation.
- Add a node — agent-only nodes, HA controllers, and cross-DC.