vSAN with Ceph (Planned)

This feature is planned and not yet implemented. This page describes the vision and intended integration.

kcore currently supports local storage backends (filesystem, LVM, ZFS) where VM disks live on the host running the VM. vSAN will add distributed shared storage using Ceph, enabling VM disks to be replicated across nodes.

Vision

Planned integration

Target use cases

How it complements local backends

Local backends (filesystem, LVM, ZFS) remain the right choice for single-node, performance-sensitive workloads where raw I/O throughput matters most. vSAN is designed for workloads that prioritise availability over peak performance.

Operators choose per-VM: a database that needs low-latency local NVMe stays on LVM, while a stateless application server uses Ceph-backed storage for easy failover.
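As a sketch of that per-VM choice (the VM names and the --name flag are illustrative here, and the final create syntax may differ; only --storage-backend appears elsewhere on this page):

kctl create vm --name db01 --storage-backend lvm    # latency-sensitive, stays local
kctl create vm --name app01 --storage-backend ceph  # stateless, replicated for failover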

Relationship to existing storage classes

vSAN will appear as a new storage class (ceph) alongside the existing filesystem, lvm, and zfs classes:

$ kctl get storage-class
NAME          NODES
filesystem    3
lvm           2
zfs           1
ceph          4

Nodes with Ceph OSDs will be listed under the ceph storage class. Creating a VM on Ceph storage:

kctl create vm \
  --storage-backend ceph \
  --storage-size-bytes 107374182400   # 100 GiB
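Since --storage-size-bytes takes a raw byte count, it can help to compute the value rather than type it out. A minimal helper (plain Python, nothing kcore-specific; the function name is our own):

```python
def gib_to_bytes(gib: int) -> int:
    """Convert a size in GiB (binary gigabytes) to a raw byte count."""
    return gib * 1024 ** 3

# 100 GiB, as used in the example above.
print(gib_to_bytes(100))  # 107374182400
```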