Add nodes to the cluster

A single-node kcore deployment is useful for evaluation, but production workloads benefit from multiple nodes for scheduling headroom, high-availability controllers, and cross-datacenter replication. This page walks through expanding your cluster from one node to many.

Every kctl node install below requires the -k flag because the node agent on the live ISO runs without TLS. After reboot, all traffic switches to mTLS automatically.
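
Before installing, you can inspect the machine's hardware from the live ISO using the same -k flag. These are sketches; the address below is illustrative and should match your new node:

kctl --node 192.168.40.151:9091 -k node disks
kctl --node 192.168.40.151:9091 -k node nics

The output helps you pick the right --os-disk (and any --data-disk) devices before running the install.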

Add a worker node (agent only)

Boot the new machine from the kcore ISO. Once the live environment is up, install it as an agent-only node pointing to your existing controllers:

kctl node install \
  --node 192.168.40.151:9091 \
  --os-disk /dev/sda \
  --join-controller 192.168.40.107:9090 \
  --join-controller 192.168.40.105:9090 \
  --dc-id DC1 \
  -k

You can pass --join-controller multiple times so the node agent knows about every controller for failover. This node runs only the node agent; no controller process is started.

After reboot, the node registers with a controller and is auto-approved by default. Verify with:

kctl get nodes
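
To inspect a single node in more detail, describe it by ID (the ID placeholder below comes from the kctl get nodes listing):

kctl get node <NODE_ID>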

Add a second controller (HA)

For HA you want at least two controllers. Add both --run-controller and --join-controller so the new node starts its own controller and joins the existing one:

kctl node install \
  --node 192.168.40.105:9091 \
  --os-disk /dev/sda \
  --run-controller \
  --join-controller 192.168.40.107:9090 \
  --dc-id DC1 \
  -k

After reboot, the two controllers automatically discover each other and begin CRDT replication. Your kctl config is auto-updated to include the new controller endpoint, so kctl can fail over transparently:

kctl get nodes

Both controllers will show an identical view of the cluster. You can also verify from a specific controller:

kctl -s 192.168.40.105:9090 get nodes
kctl -s 192.168.40.107:9090 get nodes
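
Since both listings should be identical once replication has converged, a quick check is to diff them with standard shell process substitution (the success message is illustrative):

diff <(kctl -s 192.168.40.105:9090 get nodes) \
     <(kctl -s 192.168.40.107:9090 get nodes) \
  && echo "controller views match"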

Cross-DC: add a node in DC2

Multi-datacenter deployments tag each node with a datacenter identifier. Install the node with --dc-id DC2 and --run-controller so it brings up a controller in the second site:

kctl node install \
  --node <NEW_NODE_IP>:9091 \
  --os-disk /dev/sda \
  --run-controller \
  --join-controller 192.168.40.107:9090 \
  --dc-id DC2 \
  -k

Resources created on this node are tagged with DC2. Controllers across datacenters replicate state via conflict-free replicated data types (CRDTs) with adaptive cross-DC timeouts, so each site maintains an eventually-consistent view of the cluster.

You can target a specific datacenter when creating VMs:

kctl create vm db-replica --dc DC2 \
  --image <image-url> --network default

Registration and approval

By default, nodes are auto-approved on registration. This means a node with a valid mTLS certificate issued by the cluster CA is immediately available for scheduling after reboot.

If you prefer manual approval, set requireManualApproval: true in the controller configuration. In manual mode, nodes register as pending and must be explicitly approved:

kctl get nodes
kctl node approve <NODE_ID>

If a registration is unexpected or suspicious, reject it:

kctl node reject <NODE_ID>

Even with auto-approval enabled, all node communication is authenticated via mTLS. Only nodes with a certificate signed by the cluster CA can register.

Certificates

Node certificates are renewed automatically when within 30 days of expiry. Operators can rotate the sub-CA at any time:

kctl rotate sub-ca

Inspect certificate expiry for all nodes:

kctl get nodes

Controller replication

When a second controller joins the cluster, replication starts automatically:

  1. The new controller is configured with the first controller as a replication peer.
  2. On first connection, the new controller pulls all existing replication events (node registrations, networks, VMs, etc.).
  3. The first controller discovers the new controller via the replication ack handshake and begins polling it in return.
  4. Both controllers now maintain a fully converged, eventually-consistent view of the cluster state.

No manual peering configuration is needed. Controller discovery is handled via CRDT-replicated controller.register events.
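
If you script cluster bring-up, one way to wait for the steps above to complete is to poll both controllers until their views agree. This is a sketch; the endpoints and polling interval are examples:

until diff <(kctl -s 192.168.40.107:9090 get nodes) \
           <(kctl -s 192.168.40.105:9090 get nodes) >/dev/null 2>&1; do
  sleep 2
done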

Quick reference

Action                     Command
List disks / NICs          kctl --node <IP>:9091 -k node disks / node nics
Install (agent only)       kctl node install --node <IP>:9091 --os-disk /dev/sda --join-controller <host:port> --dc-id DC1 -k
Install (with controller)  kctl node install --node <IP>:9091 --os-disk /dev/sda --join-controller <host:port> --run-controller --dc-id DC1 -k
List / describe nodes      kctl get nodes / kctl get node <ID>
Approve / reject           kctl node approve <ID> / kctl node reject <ID>
Rotate sub-CA              kctl rotate sub-ca
Multi-controller query     kctl -s <IP1>:9090 -s <IP2>:9090 get nodes

Optional install flags

Flag                           Description
--data-disk <dev>              Additional data disk (repeatable).
--data-disk-mode <mode>        Storage mode for data disks: filesystem, lvm, or zfs (default: filesystem).
--storage-backend <type>       Storage backend: filesystem, lvm, or zfs.
--disable-vxlan                Disable VXLAN overlay networking on this node.
--hostname <name>              Override the hostname detected from the machine.
--node-id <uuid>               Set a specific node UUID instead of auto-generating one.
--lvm-vg-name <name>           LVM volume group name (only with --storage-backend lvm).
--zfs-pool-name <name>         ZFS pool name (only with --storage-backend zfs).
--run-controller               Start a controller process on this node (HA / multi-DC).
--join-controller <host:port>  Existing controller to join (repeatable for multiple controllers).
--dc-id <id>                   Datacenter identifier for this node (default: DC1).
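
Combining these flags, an agent-only install that adds two ZFS-backed data disks might look like the sketch below. The address, device names, and pool name are illustrative:

kctl node install \
  --node 192.168.40.152:9091 \
  --os-disk /dev/sda \
  --data-disk /dev/sdb \
  --data-disk /dev/sdc \
  --storage-backend zfs \
  --zfs-pool-name tank \
  --join-controller 192.168.40.107:9090 \
  --dc-id DC1 \
  -k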

Example: 3-node cluster walkthrough

Starting from zero, a full 3-node cluster with 2 controllers takes four commands:

# 1. Create cluster PKI (one-time)
kctl create cluster --controller 192.168.40.107:9090 --context prod

# 2. Install first node as controller
kctl node install --node 192.168.40.107:9091 \
  --os-disk /dev/sda --run-controller --dc-id DC1 -k

# 3. Install second node as controller (joins first)
kctl node install --node 192.168.40.105:9091 \
  --os-disk /dev/sda --run-controller \
  --join-controller 192.168.40.107:9090 --dc-id DC1 -k

# 4. Install third node as worker (joins both controllers)
kctl node install --node 192.168.40.151:9091 \
  --os-disk /dev/sda \
  --join-controller 192.168.40.107:9090 \
  --join-controller 192.168.40.105:9090 --dc-id DC1 -k

After each node reboots, it auto-registers and is auto-approved. Controllers auto-discover each other and replicate state via CRDTs. Run kctl get nodes to verify the full cluster.