# YAML manifest reference
kcore supports declarative resource creation via YAML manifests. Every resource that can be created with kctl CLI flags can also be expressed as a YAML manifest, making manifests the preferred way to manage infrastructure as code.
All manifest kinds are auto-detected and applied with a single command:

```shell
kctl apply -f <file.yaml>
```
## VM manifest (kind: VM)

| Field | Type | Description |
|---|---|---|
| kind | string | Must be "VM" |
| metadata.name | string | Name of the VM |
| spec.cpu | integer | Number of vCPUs (default 2) |
| spec.memoryBytes | string or integer | RAM size, e.g. "4G" or raw bytes |
| spec.storageBackend | string | One of filesystem, lvm, zfs |
| spec.storageSizeBytes | integer or string | Root disk size in bytes or human-readable, e.g. "20G" |
| spec.targetNode | string | Pin the VM to a specific node (optional) |
| spec.dc | string | Target datacenter for scheduling (optional, mutually exclusive with targetNode) |
| spec.sshKeys | string[] | List of SSH key names to inject (must exist in the cluster) |
| spec.cloudInitUserData | string | Full cloud-init user-data (multi-line YAML block) |
| spec.disks[] | array | List of disk objects |
| spec.disks[].backendHandle | string | URL or local path to the disk image |
| spec.disks[].sha256 | string | Hex-encoded SHA-256 checksum |
| spec.disks[].format | string | One of qcow2, raw |
| spec.nics[] | array | List of network interfaces (multiple NICs supported) |
| spec.nics[].network | string | Network name (default "default") |
| spec.nics[].model | string | NIC model (default "virtio") |
| spec.imageSha256 | string | Top-level alternative to disks[].sha256 |
| spec.imageFormat | string | Top-level alternative to disks[].format |
### VM examples

LVM-backed VM with SSH key injection:

```yaml
kind: VM
metadata:
  name: web-server
spec:
  cpu: 4
  memoryBytes: "4G"
  storageBackend: lvm
  storageSizeBytes: "20G"
  targetNode: kvm-node-01
  sshKeys:
    - my-deploy-key
  nics:
    - network: vxlan-prod
    - network: nat-mgmt
  disks:
    - backendHandle: https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2
      sha256: "<sha256>"
      format: qcow2
```
ZFS-backed VM with cloud-init:

```yaml
kind: VM
metadata:
  name: db-server
spec:
  cpu: 8
  memoryBytes: "16G"
  storageBackend: zfs
  storageSizeBytes: "100G"
  dc: dc-eu-west
  sshKeys:
    - ops-team
  cloudInitUserData: |
    #cloud-config
    packages:
      - postgresql-15
    runcmd:
      - systemctl enable postgresql
      - systemctl start postgresql
  nics:
    - network: vxlan-prod
  disks:
    - backendHandle: https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2
      sha256: "<sha256>"
      format: qcow2
```
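The field table also lists spec.imageSha256 and spec.imageFormat as top-level alternatives to the per-disk sha256 and format fields. A minimal sketch of that form, assuming it applies to a VM with a single disks[] entry (the VM name below is illustrative, not from the examples above):

```yaml
kind: VM
metadata:
  name: minimal-vm
spec:
  storageBackend: filesystem
  storageSizeBytes: "10G"
  # Top-level alternatives to disks[].sha256 and disks[].format
  imageSha256: "<sha256>"
  imageFormat: qcow2
  disks:
    - backendHandle: https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2
```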
## Network manifest (kind: Network)

| Field | Type | Description |
|---|---|---|
| kind | string | Must be "Network" |
| metadata.name | string | Name of the network |
| spec.type | string | One of nat, vxlan, bridge |
| spec.gatewayIp | string | Gateway IP for the internal network |
| spec.externalIp | string | External IP for NAT/bridge (optional) |
| spec.internalNetmask | string | Netmask for the internal subnet (default "255.255.255.0") |
| spec.vlanId | integer | VLAN tag (required for bridge networks) |
| spec.enableOutboundNat | boolean | Enable outbound NAT masquerade (default true for nat/vxlan) |
| spec.targetNode | string | Pin the network to a specific node (optional, NAT only) |
| spec.allowedTcpPorts | integer[] | TCP ports to open on the network (optional) |
| spec.allowedUdpPorts | integer[] | UDP ports to open on the network (optional) |
### Network examples

VXLAN overlay (multi-node):

```yaml
kind: Network
metadata:
  name: vxlan-prod
spec:
  type: vxlan
  gatewayIp: 10.200.0.1
  internalNetmask: "255.255.255.0"
  enableOutboundNat: true
```

NAT network (single node):

```yaml
kind: Network
metadata:
  name: nat-mgmt
spec:
  type: nat
  gatewayIp: 10.100.0.1
  internalNetmask: "255.255.255.0"
  targetNode: kvm-node-01
```

Bridge network with VLAN:

```yaml
kind: Network
metadata:
  name: bridge-vlan100
spec:
  type: bridge
  vlanId: 100
  externalIp: 192.168.100.1
  gatewayIp: 192.168.100.1
  internalNetmask: "255.255.255.0"
  enableOutboundNat: false
```
## SshKey manifest (kind: SshKey)

| Field | Type | Description |
|---|---|---|
| kind | string | Must be "SshKey" |
| metadata.name | string | Name of the SSH key (referenced by VM manifests) |
| spec.publicKey | string | The full SSH public key string |
### Example

```yaml
kind: SshKey
metadata:
  name: my-deploy-key
spec:
  publicKey: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIExample... ops@company.com"
```
## Container manifest (kind: Container)

| Field | Type | Description |
|---|---|---|
| kind | string | Must be "Container" |
| metadata.name | string | Name of the container |
| spec.image | string | OCI image reference (e.g. docker.io/library/nginx:latest) |
| spec.network | string | Network name to attach to (optional) |
| spec.ports | string[] | Port mappings, e.g. "80:80" |
| spec.env | map | Environment variables as key-value pairs |
| spec.command | string[] | Override container entrypoint command (optional) |
### Example

```yaml
kind: Container
metadata:
  name: nginx-web
spec:
  image: docker.io/library/nginx:latest
  network: nat-mgmt
  ports:
    - "80:80"
    - "443:443"
  env:
    NGINX_HOST: example.com
```
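The spec.command field can override the image's entrypoint command. A sketch of that usage; the container name, image, and command below are illustrative assumptions, not part of the reference:

```yaml
kind: Container
metadata:
  name: one-shot-job
spec:
  image: docker.io/library/alpine:latest
  # command replaces the image entrypoint (per the field table above)
  command:
    - /bin/sh
    - -c
    - "echo hello && sleep 3600"
```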
## SecurityGroup manifest (kind: SecurityGroup)

| Field | Type | Description |
|---|---|---|
| kind | string | Must be "SecurityGroup" |
| metadata.name | string | Name of the security group |
| spec.description | string | Human-readable description |
| spec.rules[] | array | List of firewall rules |
| spec.rules[].protocol | string | One of tcp, udp |
| spec.rules[].hostPort | integer | Port on the host side |
| spec.rules[].targetPort | integer | Port inside the VM/container (optional, defaults to hostPort) |
| spec.rules[].sourceCidr | string | Allowed source CIDR (default "0.0.0.0/0") |
| spec.rules[].targetVm | string | Target VM name (optional) |
| spec.rules[].enableDnat | boolean | Enable DNAT for this rule |
| spec.attachments[] | array | Resources this group is attached to |
| spec.attachments[].kind | string | One of vm, network |
| spec.attachments[].target | string | Name or ID of the target resource |
| spec.attachments[].targetNode | string | Node name (required when kind is network) |
### Example

```yaml
kind: SecurityGroup
metadata:
  name: web-sg
spec:
  description: Allow HTTP and HTTPS traffic
  rules:
    - protocol: tcp
      hostPort: 80
      targetPort: 80
      sourceCidr: "0.0.0.0/0"
    - protocol: tcp
      hostPort: 443
      targetPort: 443
      sourceCidr: "0.0.0.0/0"
      enableDnat: true
      targetVm: web-01
  attachments:
    - kind: vm
      target: web-01
```
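Per the field table, an attachment with kind network also requires targetNode. A sketch of a network-wide attachment, reusing the nat-mgmt network and kvm-node-01 node from earlier examples (the group name and rule are illustrative):

```yaml
kind: SecurityGroup
metadata:
  name: net-sg
spec:
  description: Allow SSH from the internal range to the whole network
  rules:
    - protocol: tcp
      hostPort: 22
      sourceCidr: "10.0.0.0/8"
  attachments:
    - kind: network
      target: nat-mgmt
      # targetNode is required when kind is network
      targetNode: kvm-node-01
```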
## Field naming

Both camelCase (recommended) and snake_case are accepted for most fields:

| camelCase (preferred) | snake_case alternative |
|---|---|
| storageBackend | storage_backend |
| storageSizeBytes | storage_size_bytes |
| targetNode | target_node |
| targetDc | target_dc |
| sshKeys | ssh_keys |
| cloudInitUserData | cloud_init_user_data |
| backendHandle | backend_handle |
| imageSha256 | image_sha256 |
| gatewayIp | gateway_ip |
| imageFormat | image_format |
| externalIp | external_ip |
| internalNetmask | internal_netmask |
| enableOutboundNat | enable_outbound_nat |
| publicKey | public_key |
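As an illustration, the LVM-backed VM example could be written with the snake_case alternatives. This sketch uses only fields that appear in the naming table (fields without a listed alternative, such as memoryBytes, are omitted here):

```yaml
kind: VM
metadata:
  name: web-server
spec:
  cpu: 4
  storage_backend: lvm
  storage_size_bytes: "20G"
  target_node: kvm-node-01
  ssh_keys:
    - my-deploy-key
```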
## Supported manifest kinds

| Kind | Apply command | Description |
|---|---|---|
| VM | kctl apply -f or kctl create vm -f | Create a virtual machine |
| Network | kctl apply -f | Create a NAT, VXLAN, or bridge network |
| SshKey | kctl apply -f | Register an SSH public key |
| Container | kctl apply -f | Create an OCI container |
| SecurityGroup | kctl apply -f or kctl sg apply -f | Create and attach firewall rules |
## Applying manifests

Use kctl apply -f for all resource types; the kind field is auto-detected:

```shell
# Apply any manifest (kind auto-detected)
kctl apply -f vm.yaml
kctl apply -f network.yaml
kctl apply -f ssh-key.yaml
kctl apply -f container.yaml
kctl apply -f security-group.yaml

# Or use resource-specific create
kctl create vm -f vm.yaml
```

Manifests are the recommended way to manage kcore resources: they provide a reproducible, version-controllable description of your infrastructure.