Container lifecycle

kcore supports running OCI containers alongside VMs. Container support uses nerdctl (preferred) or docker as the runtime backend.

Container support is in an early phase. Treat it as an operator workflow for labs and controlled environments, not as a full orchestration platform.

Create a container (direct node API)

Send a container creation request directly to a specific node agent:

kctl --node <IP>:9091 create container my-app \
  --image nginx:latest \
  --network default \
  --port 8080:80 \
  --env MY_VAR=hello

Controller workload API

Alternatively, create a container workload through the controller:

kctl workload create my-app \
  --kind container \
  --image nginx:latest \
  --network default \
  --target-node <NODE_ID>

The direct node path (--node) sends the request straight to the node agent. The workload path goes through the controller, which handles scheduling, records the workload in its state, and pushes the desired configuration to the target node.
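The two paths differ only in how the command is assembled. As an illustrative sketch (the `build_create_args` helper is hypothetical, not part of kctl; only the flags shown above are taken from this page), a script could build either argument vector before shelling out:

```python
def build_create_args(name, image, network, *, node=None, target_node=None):
    """Assemble a kctl argv for container creation.

    node        -> direct node-agent path (kctl --node <IP>:9091 create container ...)
    target_node -> controller workload path (kctl workload create ... --kind container)
    Exactly one of the two must be given.
    """
    if (node is None) == (target_node is None):
        raise ValueError("specify exactly one of node= or target_node=")
    if node is not None:
        return ["kctl", "--node", node, "create", "container", name,
                "--image", image, "--network", network]
    return ["kctl", "workload", "create", name,
            "--kind", "container", "--image", image,
            "--network", network, "--target-node", target_node]
```

Keeping the mutual-exclusion check in one place makes it harder to accidentally mix the direct and controller flags in automation.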

Lifecycle operations

| Operation | Command |
| --- | --- |
| Start | `kctl start container <name>` |
| Stop | `kctl stop container <name>` |
| Delete | `kctl delete container <name>` |
| Get | `kctl get container <name>` |
| List | `kctl get containers` |
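The per-container operations map one-to-one onto kctl subcommands, which makes them easy to wrap in automation. A minimal sketch (the `lifecycle_cmd` helper and the dispatch table are hypothetical, assuming only the commands listed above):

```python
import subprocess

# verb -> kctl subcommand words, mirroring the lifecycle table above
LIFECYCLE = {
    "start":  ["start", "container"],
    "stop":   ["stop", "container"],
    "delete": ["delete", "container"],
    "get":    ["get", "container"],
}

def lifecycle_cmd(op, name):
    """Return the kctl argv for a lifecycle operation on a named container."""
    if op not in LIFECYCLE:
        raise ValueError(f"unknown operation: {op}")
    return ["kctl", *LIFECYCLE[op], name]

def run_lifecycle(op, name):
    """Execute the operation; raises CalledProcessError on a non-zero exit."""
    return subprocess.run(lifecycle_cmd(op, name), check=True,
                          capture_output=True, text=True)
```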

Container spec options

| Flag | Description |
| --- | --- |
| `--image` | OCI image reference (e.g. `nginx:latest`) |
| `--network` | Bridge name to attach the container to |
| `--port` | Port mapping in `host:container` format |
| `--env` | Environment variable in `KEY=VALUE` format |
| `--cmd` | Override the image entrypoint command |
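The `--port` and `--env` values follow simple textual formats, so tooling can validate them before sending a request to a node. A hedged sketch (both parser functions are hypothetical; only the `host:container` and `KEY=VALUE` formats come from the table above):

```python
import re

def parse_port(mapping):
    """Split a 'host:container' mapping into an int pair (the --port format)."""
    host, sep, container = mapping.partition(":")
    if not sep or not host.isdigit() or not container.isdigit():
        raise ValueError(f"expected host:container, got {mapping!r}")
    return int(host), int(container)

def parse_env(pair):
    """Split 'KEY=VALUE' as accepted by --env; KEY must look like an identifier."""
    key, sep, value = pair.partition("=")
    if not sep or not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", key):
        raise ValueError(f"expected KEY=VALUE, got {pair!r}")
    return key, value
```

Failing fast on a malformed mapping gives a clearer error than whatever the runtime backend would report later.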

Storage

Containers can optionally mount a volume at /data. When using the controller workload path, specify the storage backend and size:

kctl workload create my-app \
  --kind container \
  --image nginx:latest \
  --network default \
  --target-node <NODE_ID> \
  --storage-backend local \
  --storage-size-bytes 10737418240
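`--storage-size-bytes` takes a raw byte count; the `10737418240` in the example is 10 GiB. A one-line helper (hypothetical, for illustration) avoids computing such values by hand:

```python
def gib_to_bytes(gib):
    """Convert GiB to the raw byte count --storage-size-bytes expects."""
    return gib * 1024 ** 3

# 10 GiB -> 10737418240 bytes, matching the example above
print(gib_to_bytes(10))
```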

Current limitations