Declarative Virtualization
for Edge & Datacenters

kcore is a NixOS-based hypervisor that treats VMs as declarative resources. Define them in Terraform, manage them with a CLI, roll back when things break.

# Deploy a VM with kcore
resource "kcore_vm" "app" {
  name     = "production-app"
  memory   = 4096
  vcpu     = 4
  storage  = "50GB"
}

Your first cluster, in one terminal

This asciinema recording shows the full first-run flow: install and approve your first node, add the operator SSH key, create a Debian 12 VM, discover its IP with kctl describe, and SSH into it.

What you get

The things we actually built, not a wishlist

Atomic Updates & Rollback

NixOS under the hood means every system change is atomic. If an update breaks something, roll back in seconds. The host OS is immutable.

Declarative Infrastructure

VMs are defined in Terraform or YAML. Commit your infra to git, review it in a PR, apply it like any other config change.

MCP Server for AI Agents

kcore exposes an MCP server so AI coding agents (Cursor, Claude, etc.) can query and manage your VMs directly through the API.

gRPC API

Everything goes through a gRPC API. The CLI, the Terraform provider, and the MCP server all use the same interface. Build your own tooling on top.

Terraform Provider

A proper Terraform provider, not a wrapper around shell scripts. Plan, apply, and destroy VMs alongside the rest of your stack.
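A minimal configuration might look like the sketch below. The registry source address and the provider's `endpoint` attribute are assumptions for illustration, not the published schema; the `kcore_vm` resource matches the example at the top of the page.

```hcl
terraform {
  required_providers {
    kcore = {
      # Hypothetical source address -- check the provider's registry page for the real one.
      source = "kcore/kcore"
    }
  }
}

provider "kcore" {
  # Assumed attribute name: points the provider at the controller's gRPC API.
  endpoint = "controller.example.internal:443"
}

resource "kcore_vm" "app" {
  name    = "production-app"
  memory  = 4096
  vcpu    = 4
  storage = "50GB"
}
```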

Fast Provisioning

VMs boot in seconds. No waiting around for cloud-init to finish or image downloads to complete. Local storage, local speed.

How it works

A controller manages state, node agents run VMs via libvirt, and you talk to it all through a single CLI.

  • Immutable OS

    Built on NixOS. The host filesystem is read-only. Updates swap the entire system atomically.

  • API-First

    gRPC all the way down. The CLI and Terraform provider are just API clients.

  • Multi-Node

    Add nodes by booting the ISO and pointing them at the controller. They register automatically.

  • Flexible Storage

    Local directories, LVM, or ZFS. Pick what makes sense for your hardware.
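Assuming the `storage_class` attribute from the Terraform examples on this page selects the backend, switching from LVM to ZFS is a one-line change. The class name below is illustrative; actual names depend on how your pools are configured.

```hcl
resource "kcore_vm" "zfs_example" {
  name   = "zfs-vm"
  memory = 2048
  vcpu   = 2

  disk {
    size          = "20GB"
    # Illustrative class name; pick the class that matches your node's storage setup.
    storage_class = "local-zfs"
  }
}
```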

AI Infrastructure Assistant

Show me the current state of our infrastructure. Any issues?

Infrastructure Overview:

3 VMs across 2 nodes (2 running, 1 stopped)
Total resources: 14 vCPUs, 14GB RAM
⚠️ Node-02: CPU usage at 78%
Storage: 128GB used / 500GB available

💡 Recommendation: Consider scaling node-02 or redistributing workload.

What's running on node-01?

Node-01 workloads:
• production-app (4GB, 4 vCPU) - running
• dev-web (2GB, 2 vCPU) - stopped

Current load: 32% CPU, 6GB/16GB RAM

kctl
$ kctl get vms
NAME            STATUS    NODE       MEMORY  VCPU
production-app  running   node-01    4096MB  4
staging-db      running   node-02    8192MB  8
dev-web         stopped   node-01    2048MB  2

$ kctl describe vm production-app
Name:        production-app
Status:      running
Node:        node-01
Memory:      4096 MB
vCPU:        4 cores
Storage:     50 GB (local-lvm)
Network:     192.168.122.100
Created:     2026-03-10 14:23:45

AI Agent Connected

Create a VM cluster with 3 nodes, 4GB RAM each, for my web application

I'll create a VM cluster for you. Here's the infrastructure code:

resource "kcore_vm" "web_cluster" {
  count    = 3
  name     = "web-${count.index + 1}"
  memory   = 4096
  vcpu     = 2
  storage  = "20GB"
  
  network {
    type    = "vxlan"
    fabric  = "evpn"
    segment = "tenant-default"
  }
}

Would you like me to deploy this?

Yes, deploy it

✓ Deploying 3 VMs...
✓ All VMs are now running

AI agents can manage your VMs

kcore ships with an MCP server. Connect it to Claude, Cursor, or any MCP-compatible agent and let it create and manage infrastructure for you.

MCP Server

The MCP server exposes VM lifecycle, node status, and resource info to any compatible AI agent over a standard protocol.
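Wiring the server into an MCP-capable client typically comes down to a small entry in the client's MCP configuration. This is a sketch: the `kcore-mcp` command name and its flag are assumptions, since the launch invocation isn't spelled out here.

```json
{
  "mcpServers": {
    "kcore": {
      "command": "kcore-mcp",
      "args": ["--endpoint", "controller.example.internal:443"]
    }
  }
}
```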

Terraform Generation

Ask an agent to set up infrastructure and it generates real Terraform HCL that you can review, version, and apply.

Observability

Agents can query node health, VM status, and resource usage to help you debug or plan capacity changes.

Declarative by Design

Define your infrastructure with clean, readable code

# Define a complete VM with storage and networking
resource "kcore_vm" "production" {
  name   = "prod-app-01"
  memory = 8192
  vcpu   = 4
  
  disk {
    size          = "100GB"
    storage_class = "local-lvm"
  }
  
  network {
    type    = "vxlan"
    fabric  = "evpn"
    segment = "tenant-prod"
  }
  
  tags = {
    environment = "production"
    team        = "platform"
  }
}

# vm.yaml - VM specification
apiVersion: v1
kind: VirtualMachine
metadata:
  name: prod-app-01
  labels:
    environment: production
    team: platform
spec:
  memory: 8192
  vcpu: 4
  disks:
    - size: 100GB
      storageClass: local-lvm
  network:
    type: vxlan
    fabric: evpn
    segment: tenant-prod

# Create VM using kctl
$ kctl create vm prod-app-01 \
  --memory 8192 \
  --vcpu 4 \
  --disk size=100GB,storage-class=local-lvm \
  --network type=vxlan,fabric=evpn,segment=tenant-prod \
  --label environment=production \
  --label team=platform

✓ VM prod-app-01 created successfully
✓ Starting VM...

# Apply from file
$ kctl apply -f vm.yaml

✓ VM prod-app-01 configured
✓ Status: running

Open-core pricing for production virtualization

kcore keeps the core platform open source under Apache-2.0, while paid subscriptions provide the official production-ready distribution, stable updates, and support.

Community Edition is for labs, homelabs, and contributors. Standard and Premium are priced per physical CPU socket and are recommended for production.


Try it out

Community Edition is Apache-2.0: build from source or use project artifacts as published. For official signed ISOs, enterprise updates, and support, see Standard and Premium.