KubeVirt lets you set up and manage virtual machines on your Kubernetes cluster.
Getting Started
Background
KubeVirt creates two new types of resources on your cluster: VirtualMachine (vm) and VirtualMachineInstance (vmi).
A VirtualMachine defines how to create VMIs; a VirtualMachineInstance represents a running virtual machine.
As with Deployments and Pods, you will typically not create VirtualMachineInstances manually.
Instead, you define VirtualMachines in your manifests and control them using virtctl. KubeVirt then creates the VirtualMachineInstances automatically.
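For example, once a VirtualMachine named testvm exists (the name is illustrative), virtctl drives its lifecycle:
# Starting the VM creates a VirtualMachineInstance
virtctl start testvm
# The running instance shows up as a VMI
kubectl get vmis
# Stopping the VM tears the instance down
virtctl stop testvm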
Requirements
See requirements
You need a Kubernetes cluster with kubectl set up.
You do not need to install qemu-kvm or libvirt-daemon-system on the nodes; KubeVirt ships its own virtualization stack inside pods.
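A quick way to confirm a node can host VMs (run on the node itself; KubeVirt needs /dev/kvm unless software emulation is enabled):
# Non-zero output means VT-x/AMD-V is available
egrep -c '(vmx|svm)' /proc/cpuinfo
# The kvm device should exist
ls /dev/kvm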
Install KubeVirt
See installation
# Get the latest version string
export VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- '-rc' | sort -r | head -1 | awk -F': ' '{print $2}' | sed 's/,//' | xargs)
echo $VERSION
# Deploy operator
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml
# Deploy custom resources
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml
# Install virtctl
VERSION=$(kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.observedKubeVirtVersion}")
ARCH=$(uname -s | tr A-Z a-z)-$(uname -m | sed 's/x86_64/amd64/')   # on Windows, use ARCH=windows-amd64.exe
echo ${ARCH}
curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-${ARCH}
chmod +x virtctl
sudo install virtctl /usr/local/bin
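Once the operator and custom resource are deployed, wait for the control plane to come up before creating VMs:
# Wait until KubeVirt reports Available, then check its pods
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m
kubectl get pods -n kubevirt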
Creating a VM
Loading ISOs into the cluster
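A minimal sketch of one way to do this, assuming the Containerized Data Importer (CDI) is installed in the cluster; the PVC name iso-win10 matches the Windows manifest below:
# Upload a local ISO into a new PVC via CDI's upload proxy
virtctl image-upload pvc iso-win10 \
  --size=6Gi \
  --image-path=/path/to/Win10.iso \
  --insecure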
Windows
Deploy the example manifest below.
During the Windows install, you will need to load the `viostor` driver so the installer can see the virtio disk.
After the install, load the remaining drivers using Device Manager.
You can then remove the cdrom and windows-guest-tools disks.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: windows10vm-pvc
spec:
  storageClassName: ""
  volumeName: windows10vm-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
---
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: windows10vm
spec:
  runStrategy: Manual
  template:
    metadata:
      labels:
        kubevirt.io/domain: win10vm1
    spec:
      terminationGracePeriodSeconds: 300
      domain:
        cpu:
          cores: 4
        resources:
          requests:
            memory: 8G
        devices:
          disks:
            - name: cdrom
              bootOrder: 1
              cdrom:
                bus: sata
            - name: maindrive
              bootOrder: 2
              disk:
                bus: virtio
            - name: windows-guest-tools
              bootOrder: 3
              cdrom:
                bus: sata
          interfaces:
            - name: nic-0
              model: virtio
              masquerade: {}
          sound:
            name: audio
            model: ich9
      networks:
        - name: nic-0
          pod: {}
      volumes:
        - name: cdrom
          persistentVolumeClaim:
            claimName: iso-win10
        - name: maindrive
          persistentVolumeClaim:
            claimName: windows10vm-pvc
        - name: windows-guest-tools
          containerDisk:
            image: quay.io/kubevirt/virtio-container-disk
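Because the VM uses runStrategy: Manual, start it explicitly and connect to its graphical console (the manifest filename is illustrative):
kubectl apply -f windows10vm.yaml
virtctl start windows10vm
virtctl vnc windows10vm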
vGPU
Intel GVT-g
This is deprecated. Additionally, I don't recommend it due to stability issues.
See intel-vgpu-kubevirt and Archwiki:Intel GVT-g.
Setup the nodes
Run the following on each node with an Intel GPU.
# Enable kvmgt and iommu
sudo sh -c "echo kvmgt > /etc/modules-load.d/gpu-kvmgt.conf"
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&intel_iommu=on i915.enable_gvt=1 /' /etc/default/grub
sudo update-grub
sudo reboot
# Check that kvmgt modules are loaded
sudo lsmod | grep kvmgt
# Create a vGPU on each boot
pci_id=$(sudo lspci | grep -oP '([\d:\.]+)(?=\sVGA)')
uuid1=$(uuidgen)
cat > ~/gvtg-enable.service << EOF
[Unit]
Description=Create Intel GVT-g vGPU

[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo '${uuid1}' > /sys/devices/pci0000:00/0000:${pci_id}/mdev_supported_types/i915-GVTg_V5_8/create"
ExecStop=/bin/sh -c "echo '1' > /sys/devices/pci0000:00/0000:${pci_id}/${uuid1}/remove"
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
sudo mv ~/gvtg-enable.service /etc/systemd/system/gvtg-enable.service
sudo systemctl enable gvtg-enable --now
Notes
There are two sizes of vGPU you can create:
- i915-GVTg_V5_4 supports up to 1920x1200
- i915-GVTg_V5_8 supports up to 1024x768
Run cat /sys/devices/pci0000:00/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/description to get a description of the vGPU type.
On my nodes, I can create a single i915-GVTg_V5_4 or two i915-GVTg_V5_8.
See Archwiki:Intel GVT-g for details.
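To check how many more instances of a given type the GPU can still host, read the standard mdev sysfs attribute:
# Remaining capacity for the 1/8-size vGPU type on the integrated GPU
cat /sys/devices/pci0000:00/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/available_instances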
Add GPU to your VMs
spec:
  template:
    spec:
      domain:
        devices:
          gpus:
            - deviceName: intel.com/U630
              name: gpu1
              virtualGPUOptions:
                display:
                  enabled: false
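For the deviceName above to be schedulable, KubeVirt must also permit the mediated device. A sketch of the relevant KubeVirt CR stanza, assuming the intel.com/U630 resource name from the snippet above and the i915-GVTg_V5_8 mdev type created earlier (check intel-vgpu-kubevirt for the exact selector your GPU reports):
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    permittedHostDevices:
      mediatedDevices:
        - mdevNameSelector: i915-GVTg_V5_8
          resourceName: intel.com/U630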
Networking
Bridge (Macvtap)
This allows your VM to get an IP address from the host network.
- Install multus-cni
- Enable macvtap (instructions)
- Follow the remaining instructions here
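A sketch of the NetworkAttachmentDefinition those steps end with, assuming multus-cni and the macvtap device plugin are installed, and that eth0 is the host interface to attach to (both names are illustrative):
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvtapnetwork
  annotations:
    k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/eth0
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "macvtapnetwork",
      "type": "macvtap",
      "mtu": 1500
    }'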
Resources