vSAN with Ceph (Planned)
This feature is planned and not yet implemented. This page describes the vision and intended integration.
kcore currently supports local storage backends (filesystem, LVM, ZFS) where VM disks live on the host running the VM. vSAN will add distributed shared storage using Ceph, enabling VM disks to be replicated across nodes.
Vision
- Distributed block storage across cluster nodes.
- Ceph OSDs run on data disks, managed by the controller.
- RBD (RADOS Block Device) volumes serve as VM disks.
- Storage is replicated across nodes for redundancy.
Planned integration
- Ceph OSDs deployed on data disks at install time using a new --storage-backend ceph flag.
- Controller manages pool topology and replication factor.
- RBD volumes created and attached as VM disks transparently.
- CRUSH map aligned with DC topology (dc_id) so replicas span failure domains.
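The failure-domain constraint in the last point can be illustrated with a short sketch. This is not Ceph's actual CRUSH algorithm or kcore's real data model; the node names, dc_id values, and function are invented for illustration:

```python
# Illustrative only: models the intended placement rule that no two
# replicas of a volume land on nodes sharing a dc_id (failure domain).
def place_replicas(nodes, replication_factor):
    """Pick one node per distinct dc_id until the replication factor is met.

    `nodes` is a list of (node_name, dc_id) tuples -- a hypothetical
    stand-in for cluster topology, not kcore's actual API.
    """
    chosen, used_dcs = [], set()
    for name, dc_id in nodes:
        if dc_id not in used_dcs:
            chosen.append(name)
            used_dcs.add(dc_id)
        if len(chosen) == replication_factor:
            return chosen
    raise ValueError("not enough failure domains for requested replication")

cluster = [("node-a", "dc1"), ("node-b", "dc1"),
           ("node-c", "dc2"), ("node-d", "dc3")]
print(place_replicas(cluster, 3))  # ['node-a', 'node-c', 'node-d']
```

Note that node-b is skipped even though it has capacity: it shares dc1 with node-a, so placing a replica there would not survive a dc1 outage.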
Target use cases
- HA VM storage — survive single-node failure without data loss.
- Live migration — VMs can move between nodes without copying disks.
- Multi-node data redundancy — configurable replication factor (e.g. 2 or 3 copies).
- Shared image cache — VM base images stored once in Ceph, cloned instantly.
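Replication trades raw capacity for redundancy: with N-way replication every byte is stored N times, so usable capacity is roughly raw capacity divided by N. A quick sketch (the node count and disk sizes are invented for illustration):

```python
def usable_bytes(raw_bytes, replication_factor):
    # N-way replication stores every byte N times,
    # so usable capacity is roughly raw / N.
    return raw_bytes // replication_factor

# Hypothetical cluster: four nodes, 4 TiB of OSD capacity each.
raw = 4 * 4 * 1024**4

print(usable_bytes(raw, 2))  # 2 copies: half of raw
print(usable_bytes(raw, 3))  # 3 copies: a third of raw
```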
How it complements local backends
Local backends (filesystem, LVM, ZFS) remain the right choice for single-node, performance-sensitive workloads where raw I/O throughput matters most. vSAN is designed for workloads that prioritise availability over raw I/O performance.
Operators choose per-VM: a database that needs low-latency local NVMe stays on LVM, while a stateless application server uses Ceph-backed storage for easy failover.
Relationship to existing storage classes
vSAN will appear as a new storage class (ceph) alongside the existing filesystem, lvm, and zfs classes:
$ kctl get storage-class
NAME NODES
filesystem 3
lvm 2
zfs 1
ceph 4
Nodes with Ceph OSDs will be listed under the ceph storage class. Creating a VM on Ceph storage:
$ kctl create vm \
    --storage-backend ceph \
    --storage-size-bytes 107374182400
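The --storage-size-bytes flag takes a raw byte count; the value in the example is 100 GiB. The conversion is simply:

```python
# 100 GiB expressed in bytes, matching the value in the example above.
size = 100 * 1024**3
print(size)  # 107374182400
```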