KubeVirt

From David's Wiki
KubeVirt lets you set up and manage virtual machines on your Kubernetes cluster.


==Getting Started==
===Background===
KubeVirt adds two new resource types to your cluster: <code>VirtualMachine</code> (vm) and <code>VirtualMachineInstance</code> (vmi).
A <code>VirtualMachine</code> defines how to create VMIs, while a <code>VirtualMachineInstance</code> represents a running virtual machine.
 
Similar to the relationship between Deployments and Pods, you will typically not create <code>VirtualMachineInstance</code> resources manually.
Instead, you define <code>VirtualMachine</code> resources in your manifests and control them with <code>virtctl</code>; KubeVirt then creates the <code>VirtualMachineInstance</code> automatically.
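For example, a minimal <code>VirtualMachine</code> manifest looks roughly like this. This is a sketch, not from this wiki: the name is illustrative and the root disk uses the cirros demo container disk image from the KubeVirt quickstarts.
<syntaxhighlight lang="yaml">
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: testvm
spec:
  # The VMI is only created when you run `virtctl start testvm`
  runStrategy: Manual
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1G
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
</syntaxhighlight>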
 
===Requirements===
See [https://kubevirt.io/user-guide/operations/installation/#requirements requirements]<br>
* You need a kubernetes cluster with <code>kubectl</code> set up.
* You do '''not''' need to install <code>qemu-kvm libvirt-daemon-system</code> on the nodes.
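To confirm that a node supports hardware virtualization (without it, KubeVirt has to fall back to software emulation), you can check the CPU flags on each node:
<syntaxhighlight lang="bash">
# A non-zero count means VT-x (vmx) or AMD-V (svm) is available on this node
grep -E -c '(vmx|svm)' /proc/cpuinfo
</syntaxhighlight>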
 
===Install KubeVirt===
See [https://kubevirt.io/user-guide/operations/installation/ installation]
{{hidden | Install commands |
<syntaxhighlight lang="bash">
# Get the latest version string
export VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- '-rc' | sort -r | head -1 | awk -F': ' '{print $2}' | sed 's/,//' | xargs)
echo $VERSION

# Deploy operator
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml
# Deploy custom resources
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml

# Install virtctl
VERSION=$(kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.observedKubeVirtVersion}")
ARCH=$(uname -s | tr A-Z a-z)-$(uname -m | sed 's/x86_64/amd64/') || windows-amd64.exe
echo ${ARCH}
curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-${ARCH}
chmod +x virtctl
sudo install virtctl /usr/local/bin
</syntaxhighlight>
}}
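To check that the deployment succeeded (these assume the default <code>kubevirt</code> namespace):
<syntaxhighlight lang="bash">
# Watch the operator roll out the KubeVirt components
kubectl get pods -n kubevirt
# The kubevirt CR reports a phase of "Deployed" once everything is ready
kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o jsonpath='{.status.phase}'
</syntaxhighlight>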
===Creating a VM===
====Loading ISOs into the cluster====
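One way to load an ISO into a PVC is with the [https://github.com/kubevirt/containerized-data-importer Containerized Data Importer (CDI)] and <code>virtctl image-upload</code>. A sketch, assuming CDI is installed and its upload proxy is reachable; the PVC name matches the <code>iso-win10</code> claim referenced in the manifest below, while the size and ISO path are illustrative:
<syntaxhighlight lang="bash">
# Upload a local ISO into a new PVC named iso-win10
virtctl image-upload pvc iso-win10 \
  --size=6Gi \
  --image-path=./Win10_English_x64.iso
</syntaxhighlight>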
====Windows====
* Deploy the example manifest below.
* During the install, you will need to load the <code>viostor</code> driver.
* After install, load the remaining drivers using device manager.
* Then you can remove the ''cdrom'' and ''windows-guest-tools'' disks.
{{hidden | Example manifest |
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: windows10vm-pvc
spec:
  storageClassName: ""
  volumeName: windows10vm-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
---
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: windows10vm
spec:
  runStrategy: Manual
  template:
    metadata:
      labels:
        kubevirt.io/domain: win10vm1
    spec:
      terminationGracePeriodSeconds: 300
      domain:
        cpu:
          cores: 4
        resources:
          requests:
            memory: 8G
        devices:
          disks:
            - name: cdrom
              bootOrder: 1
              cdrom:
                bus: sata
            - name: maindrive
              bootOrder: 2
              disk:
                bus: virtio
            - name: windows-guest-tools
              bootOrder: 3
              cdrom:
                bus: sata
          interfaces:
            - name: nic-0
              model: virtio
              masquerade: {}
          sound:
            name: audio
            model: ich9
      networks:
        - name: nic-0
          pod: {}
      volumes:
        - name: cdrom
          persistentVolumeClaim:
            claimName: iso-win10
        - name: maindrive
          persistentVolumeClaim:
            claimName: windows10vm-pvc
        - name: windows-guest-tools
          containerDisk:
            image: quay.io/kubevirt/virtio-container-disk
</syntaxhighlight>
}}
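Since the manifest uses <code>runStrategy: Manual</code>, the VM will not boot on its own; start it and connect to its display with <code>virtctl</code>:
<syntaxhighlight lang="bash">
# Create the VMI from the VirtualMachine definition
virtctl start windows10vm
# Watch the instance come up
kubectl get vmi
# Open a VNC connection to the guest (requires a local VNC client)
virtctl vnc windows10vm
</syntaxhighlight>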
==vGPU==
===Intel GVT-g===
See [https://kubevirt.io/2021/intel-vgpu-kubevirt.html#fedora-workstation-prep intel-vgpu-kubevirt] and [[Archwiki:Intel GVT-g]]. I don't recommend this due to [https://github.com/intel/gvt-linux/issues/153 stability issues].
{{hidden | Instructions |
;Set up the nodes
Run the following on each node with an Intel GPU.
<syntaxhighlight lang="bash">
# Enable kvmgt and iommu
sudo sh -c "echo kvmgt > /etc/modules-load.d/gpu-kvmgt.conf"
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&intel_iommu=on i915.enable_gvt=1 /' /etc/default/grub
sudo update-grub
sudo reboot
# Check that kvmgt modules are loaded
sudo lsmod | grep kvmgt
# Create two vGPUs
pci_id=$(sudo lspci | grep -oP '([\d:\.]+)(?=\sVGA)')
uuid1=$(uuidgen)
cat > ~/gvtg-enable.service << EOF
[Unit]
Description=Create Intel GVT-g vGPU
[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo '${uuid1}' > /sys/devices/pci0000:00/0000:${pci_id}/mdev_supported_types/i915-GVTg_V5_8/create"
ExecStop=/bin/sh -c "echo '1' > /sys/devices/pci0000:00/0000:${pci_id}/${uuid1}/remove"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
sudo mv ~/gvtg-enable.service /etc/systemd/system/gvtg-enable.service
sudo systemctl enable gvtg-enable --now
</syntaxhighlight>
;Notes
* There are two sizes of vGPU you can create:
** <code>i915-GVTg_V5_4</code> supports up to 1920x1200
** <code>i915-GVTg_V5_8</code> supports up to 1024x768
** See <code>cat /sys/devices/pci0000:00/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/description</code> to get a description of the vGPU.
** On my nodes, I can create a single <code>i915-GVTg_V5_4</code> or two <code>i915-GVTg_V5_8</code>.
* See [[Archwiki:Intel GVT-g]] for details.
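Before a VM can request <code>intel.com/U630</code>, the mediated device has to be permitted in the KubeVirt CR. Roughly like the following, a sketch based on the intel-vgpu-kubevirt post linked above; verify the <code>mdevNameSelector</code> against the mdev type you actually created:
<syntaxhighlight lang="yaml">
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    permittedHostDevices:
      mediatedDevices:
        - mdevNameSelector: "i915-GVTg_V5_8"
          resourceName: "intel.com/U630"
</syntaxhighlight>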
;Add GPU to your VMs
<syntaxhighlight lang="yaml">
spec:
  template:
    spec:
      domain:
        devices:
          gpus:
            - deviceName: intel.com/U630
              name: gpu1
              virtualGPUOptions:
                display:
                  enabled: false
</syntaxhighlight>
}}
==Networking==
===Bridge (Macvtap)===
This allows your VM to get an IP from the host network.
# Install [https://github.com/k8snetworkplumbingwg/multus-cni multus-cni]
# Enable macvtap [https://kubevirt.io/user-guide/operations/activating_feature_gates/#how-to-activate-a-feature-gate instructions]
# Follow the remaining instructions [https://kubevirt.io/user-guide/virtual_machines/interfaces_and_networks/ here]
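The pieces end up looking roughly like this. A sketch only: the NetworkAttachmentDefinition name, the node interface <code>eth0</code>, and the interface name in the VM spec are assumptions — check the linked docs for the exact resource names on your cluster.
<syntaxhighlight lang="yaml">
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvtapnetwork
  annotations:
    k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/eth0
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "macvtapnetwork",
    "type": "macvtap"
  }'
---
# In the VM spec, replace the masquerade interface with a macvtap
# binding backed by a multus network:
#   spec.template.spec.domain.devices.interfaces:
#     - name: hostnetwork
#       macvtap: {}
#   spec.template.spec.networks:
#     - name: hostnetwork
#       multus:
#         networkName: macvtapnetwork
</syntaxhighlight>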
==Resources==
* [https://kubevirt.io/api-reference/master/ KubeVirt API]
* [https://kubevirt.io/2021/intel-vgpu-kubevirt.html#fedora-workstation-prep intel-vgpu-kubevirt]

Latest revision as of 16:09, 2 February 2024
