# YAML manifest reference

kcore supports declarative resource creation via YAML manifests. Every resource that can be created with `kctl` CLI flags can also be expressed as a YAML manifest, making manifests the preferred way to manage infrastructure as code.

All manifest kinds are auto-detected and applied with a single command:

```shell
kctl apply -f <file.yaml>
```

## VM manifest (kind: VM)

| Field | Type | Description |
|---|---|---|
| `kind` | string | Must be `"VM"` |
| `metadata.name` | string | Name of the VM |
| `spec.cpu` | integer | Number of vCPUs (default 2) |
| `spec.memoryBytes` | string \| integer | RAM size, e.g. `"4G"` or raw bytes |
| `spec.storageBackend` | string | `filesystem` \| `lvm` \| `zfs` |
| `spec.storageSizeBytes` | integer \| string | Root disk size in bytes or human-readable, e.g. `"20G"` |
| `spec.targetNode` | string | Pin the VM to a specific node (optional) |
| `spec.dc` | string | Target datacenter for scheduling (optional, mutually exclusive with `targetNode`) |
| `spec.sshKeys` | string[] | List of SSH key names to inject (must exist in the cluster) |
| `spec.cloudInitUserData` | string | Full cloud-init user-data (multi-line YAML block) |
| `spec.disks[]` | array | List of disk objects |
| `spec.disks[].backendHandle` | string | URL or local path to the disk image |
| `spec.disks[].sha256` | string | Hex-encoded SHA-256 checksum |
| `spec.disks[].format` | string | `qcow2` \| `raw` |
| `spec.nics[]` | array | List of network interfaces (multiple NICs supported) |
| `spec.nics[].network` | string | Network name (default `"default"`) |
| `spec.nics[].model` | string | NIC model (default `"virtio"`) |
| `spec.imageSha256` | string | Top-level alternative to `disks[].sha256` |
| `spec.imageFormat` | string | Top-level alternative to `disks[].format` |

### VM examples

LVM-backed VM with SSH key injection:

```yaml
kind: VM
metadata:
  name: web-server
spec:
  cpu: 4
  memoryBytes: "4G"
  storageBackend: lvm
  storageSizeBytes: "20G"
  targetNode: kvm-node-01
  sshKeys:
    - my-deploy-key
  nics:
    - network: vxlan-prod
    - network: nat-mgmt
  disks:
    - image: https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2
      sha256: "<sha256>"
      format: qcow2
```

ZFS-backed VM with cloud-init:

```yaml
kind: VM
metadata:
  name: db-server
spec:
  cpu: 8
  memoryBytes: "16G"
  storageBackend: zfs
  storageSizeBytes: "100G"
  dc: dc-eu-west
  sshKeys:
    - ops-team
  cloudInitUserData: |
    #cloud-config
    packages:
      - postgresql-15
    runcmd:
      - systemctl enable postgresql
      - systemctl start postgresql
  nics:
    - network: vxlan-prod
  disks:
    - image: https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2
      sha256: "<sha256>"
      format: qcow2
```
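
The field table also lists `spec.imageSha256` and `spec.imageFormat` as top-level alternatives to the per-disk fields, which none of the examples above use. A sketch of that form (the VM name and sizes are illustrative; `<sha256>` stands in for a real checksum, as in the examples above):

```yaml
kind: VM
metadata:
  name: cache-server
spec:
  cpu: 2
  memoryBytes: "2G"
  storageBackend: filesystem
  storageSizeBytes: "10G"
  imageSha256: "<sha256>"   # top-level alternative to disks[].sha256
  imageFormat: qcow2        # top-level alternative to disks[].format
  disks:
    - image: https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2
```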

## Network manifest (kind: Network)

| Field | Type | Description |
|---|---|---|
| `kind` | string | Must be `"Network"` |
| `metadata.name` | string | Name of the network |
| `spec.type` | string | `nat` \| `vxlan` \| `bridge` |
| `spec.gatewayIp` | string | Gateway IP for the internal network |
| `spec.externalIp` | string | External IP for NAT/bridge (optional) |
| `spec.internalNetmask` | string | Netmask for the internal subnet (default `"255.255.255.0"`) |
| `spec.vlanId` | integer | VLAN tag (required for bridge networks) |
| `spec.enableOutboundNat` | boolean | Enable outbound NAT masquerade (default true for `nat`/`vxlan`) |
| `spec.targetNode` | string | Pin the network to a specific node (optional, NAT only) |
| `spec.allowedTcpPorts` | integer[] | TCP ports to open on the network (optional) |
| `spec.allowedUdpPorts` | integer[] | UDP ports to open on the network (optional) |

### Network examples

VXLAN overlay (multi-node):

```yaml
kind: Network
metadata:
  name: vxlan-prod
spec:
  type: vxlan
  gatewayIp: 10.200.0.1
  internalNetmask: "255.255.255.0"
  enableOutboundNat: true
```

NAT network (single node):

```yaml
kind: Network
metadata:
  name: nat-mgmt
spec:
  type: nat
  gatewayIp: 10.100.0.1
  internalNetmask: "255.255.255.0"
  targetNode: kvm-node-01
```

Bridge network with VLAN:

```yaml
kind: Network
metadata:
  name: bridge-vlan100
spec:
  type: bridge
  vlanId: 100
  externalIp: 192.168.100.1
  gatewayIp: 192.168.100.1
  internalNetmask: "255.255.255.0"
  enableOutboundNat: false
```
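
The field table also lists `spec.allowedTcpPorts` and `spec.allowedUdpPorts`, which the examples above do not exercise. A sketch of a NAT network that opens DNS ports (the network name, subnet, and port choices are illustrative):

```yaml
kind: Network
metadata:
  name: nat-dns
spec:
  type: nat
  gatewayIp: 10.110.0.1
  internalNetmask: "255.255.255.0"
  # Open DNS on both protocols; other ports stay closed
  allowedTcpPorts:
    - 53
  allowedUdpPorts:
    - 53
```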

## SshKey manifest (kind: SshKey)

| Field | Type | Description |
|---|---|---|
| `kind` | string | Must be `"SshKey"` |
| `metadata.name` | string | Name of the SSH key (referenced by VM manifests) |
| `spec.publicKey` | string | The full SSH public key string |

### Example

```yaml
kind: SshKey
metadata:
  name: my-deploy-key
spec:
  publicKey: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIExample... ops@company.com"
```

## Container manifest (kind: Container)

| Field | Type | Description |
|---|---|---|
| `kind` | string | Must be `"Container"` |
| `metadata.name` | string | Name of the container |
| `spec.image` | string | OCI image reference (e.g. `docker.io/library/nginx:latest`) |
| `spec.network` | string | Network name to attach to (optional) |
| `spec.ports` | string[] | Port mappings, e.g. `"80:80"` |
| `spec.env` | map | Environment variables as key-value pairs |
| `spec.command` | string[] | Override container entrypoint command (optional) |

### Example

```yaml
kind: Container
metadata:
  name: nginx-web
spec:
  image: docker.io/library/nginx:latest
  network: nat-mgmt
  ports:
    - "80:80"
    - "443:443"
  env:
    NGINX_HOST: example.com
```
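
The field table also lists `spec.command` for overriding the entrypoint command, which the example above does not use. A sketch (the container name, image tag, and command arguments are illustrative):

```yaml
kind: Container
metadata:
  name: redis-cache
spec:
  image: docker.io/library/redis:7
  network: nat-mgmt
  ports:
    - "6379:6379"
  # command overrides the image's entrypoint command
  command:
    - redis-server
    - --appendonly
    - "yes"
```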

## SecurityGroup manifest (kind: SecurityGroup)

| Field | Type | Description |
|---|---|---|
| `kind` | string | Must be `"SecurityGroup"` |
| `metadata.name` | string | Name of the security group |
| `spec.description` | string | Human-readable description |
| `spec.rules[]` | array | List of firewall rules |
| `spec.rules[].protocol` | string | `tcp` \| `udp` |
| `spec.rules[].hostPort` | integer | Port on the host side |
| `spec.rules[].targetPort` | integer | Port inside the VM/container (optional, defaults to `hostPort`) |
| `spec.rules[].sourceCidr` | string | Allowed source CIDR (default `"0.0.0.0/0"`) |
| `spec.rules[].targetVm` | string | Target VM name (optional) |
| `spec.rules[].enableDnat` | boolean | Enable DNAT for this rule |
| `spec.attachments[]` | array | Resources this group is attached to |
| `spec.attachments[].kind` | string | `vm` \| `network` |
| `spec.attachments[].target` | string | Name or ID of the target resource |
| `spec.attachments[].targetNode` | string | Node name (required when `kind` is `network`) |

### Example

```yaml
kind: SecurityGroup
metadata:
  name: web-sg
spec:
  description: Allow HTTP and HTTPS traffic
  rules:
    - protocol: tcp
      hostPort: 80
      targetPort: 80
      sourceCidr: "0.0.0.0/0"
    - protocol: tcp
      hostPort: 443
      targetPort: 443
      sourceCidr: "0.0.0.0/0"
      enableDnat: true
      targetVm: web-01
  attachments:
    - kind: vm
      target: web-01
```
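
The attachments table states that `targetNode` is required when attaching to a network rather than a VM. A sketch of a network attachment (the group name, source CIDR, and node/network names are illustrative, reusing names from earlier examples):

```yaml
kind: SecurityGroup
metadata:
  name: mgmt-sg
spec:
  description: Restrict SSH access to the management subnet
  rules:
    - protocol: tcp
      hostPort: 22
      sourceCidr: "203.0.113.0/24"
  attachments:
    - kind: network
      target: nat-mgmt
      targetNode: kvm-node-01   # required when kind is network
```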

## Field naming

Both camelCase (recommended) and snake_case are accepted for most fields:

| camelCase (preferred) | snake_case alternative |
|---|---|
| `storageBackend` | `storage_backend` |
| `storageSizeBytes` | `storage_size_bytes` |
| `targetNode` | `target_node` |
| `targetDc` | `target_dc` |
| `sshKeys` | `ssh_keys` |
| `cloudInitUserData` | `cloud_init_user_data` |
| `backendHandle` | `backend_handle` |
| `imageSha256` | `image_sha256` |
| `imageFormat` | `image_format` |
| `gatewayIp` | `gateway_ip` |
| `externalIp` | `external_ip` |
| `internalNetmask` | `internal_netmask` |
| `enableOutboundNat` | `enable_outbound_nat` |
| `publicKey` | `public_key` |
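
Per the table, a VM manifest written with the snake_case spellings should be accepted as well. A sketch (the VM name is illustrative; only fields listed in the table are converted, since snake_case is accepted for "most" fields rather than all):

```yaml
kind: VM
metadata:
  name: snake-style
spec:
  cpu: 2
  memoryBytes: "4G"          # not listed in the table, so kept camelCase
  storage_backend: lvm       # storageBackend
  storage_size_bytes: "20G"  # storageSizeBytes
  target_node: kvm-node-01   # targetNode
  ssh_keys:                  # sshKeys
    - my-deploy-key
```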

## Supported manifest kinds

| Kind | Apply command | Description |
|---|---|---|
| `VM` | `kctl apply -f` or `kctl create vm -f` | Create a virtual machine |
| `Network` | `kctl apply -f` | Create a NAT, VXLAN, or bridge network |
| `SshKey` | `kctl apply -f` | Register an SSH public key |
| `Container` | `kctl apply -f` | Create an OCI container |
| `SecurityGroup` | `kctl apply -f` or `kctl sg apply -f` | Create and attach firewall rules |

## Applying manifests

Use `kctl apply -f` for all resource types. The `kind` field is auto-detected:

```shell
# Apply any manifest (kind auto-detected)
kctl apply -f vm.yaml
kctl apply -f network.yaml
kctl apply -f ssh-key.yaml
kctl apply -f container.yaml
kctl apply -f security-group.yaml

# Or use resource-specific create
kctl create vm -f vm.yaml
```

Manifests are the recommended way to manage kcore resources. They provide a reproducible, version-controllable description of your infrastructure.