KubeVirt

KubeVirt lets you set up and manage virtual machines on your Kubernetes cluster.

Getting Started

Background

KubeVirt adds two new resource types to your cluster: VirtualMachine (vm) and VirtualMachineInstance (vmi). A VirtualMachine defines how to create VMIs, and a VirtualMachineInstance represents a running virtual machine.

Similar to the relationship between Deployments and Pods, you will typically not create VirtualMachineInstances manually. Instead, you define VirtualMachines in your manifests and control them using virtctl; KubeVirt then creates the VirtualMachineInstances automatically.
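
For example, with a VirtualMachine like the windows10vm defined later on this page, a typical lifecycle looks like this (a sketch; the manifest filename is hypothetical):

# Create the VirtualMachine object; nothing runs yet
kubectl apply -f windows10vm.yaml

# Start it: KubeVirt creates a VirtualMachineInstance and a virt-launcher pod
virtctl start windows10vm

# Inspect the VM definition and the running instance
kubectl get vm windows10vm
kubectl get vmi windows10vm

# Stop it again, which deletes the VirtualMachineInstance but keeps the VirtualMachine
virtctl stop windows10vm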

Requirements

See the requirements in the KubeVirt documentation.

  • You need a Kubernetes cluster with kubectl set up.
  • You do not need to install qemu-kvm or libvirt-daemon-system on the nodes.
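
If you want to double-check a node before installing, hardware virtualization support can be verified directly on the host (this check is a sketch, not part of the original requirements):

# A non-zero count means the CPU exposes VT-x (vmx) or AMD-V (svm)
grep -E -c '(vmx|svm)' /proc/cpuinfo

# /dev/kvm should exist once the kvm kernel modules are loaded
ls -l /dev/kvm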

Install KubeVirt

See the installation guide in the KubeVirt documentation.

Install commands
# Get the latest version string
export VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- '-rc' | sort -r | head -1 | awk -F': ' '{print $2}' | sed 's/,//' | xargs)
echo $VERSION

# Deploy operator
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml
# Deploy custom resources
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml

# Install virtctl
VERSION=$(kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.observedKubeVirtVersion}")
ARCH=$(uname -s | tr A-Z a-z)-$(uname -m | sed 's/x86_64/amd64/') || windows-amd64.exe
echo ${ARCH}
curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-${ARCH}
chmod +x virtctl
sudo install virtctl /usr/local/bin
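
After deploying the operator and custom resource, it helps to wait for KubeVirt to report itself as available before creating VMs (verification sketch, not from the original page):

# Watch the KubeVirt components come up
kubectl get pods -n kubevirt

# Block until the deployment is ready
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m

# Should print "Deployed" when finished
kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath='{.status.phase}'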

Creating a VM

Loading ISOs into the cluster

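One way to make an installer ISO available to a VM, assuming the Containerized Data Importer (CDI) is installed in the cluster, is to upload it into a PVC with virtctl image-upload. The PVC name iso-win10 matches the cdrom volume in the manifest below; the size and image path are placeholders:

# Upload a local ISO into a new PVC (requires CDI and a reachable upload proxy)
virtctl image-upload pvc iso-win10 \
  --size=6Gi \
  --image-path=/path/to/Win10.iso \
  --insecure
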
Windows

  • Deploy the example manifest below.
  • During the install, you will need to load the `viostor` driver.
  • After install, load the remaining drivers using device manager.
  • Then you can remove the cdrom and windows-guest-tools disks.
Example manifest
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: windows10vm-pvc
spec:
  storageClassName: ""
  volumeName: windows10vm-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
---
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: windows10vm
spec:
  runStrategy: Manual
  template:
    metadata:
      labels:
        kubevirt.io/domain: win10vm1
    spec:
      terminationGracePeriodSeconds: 300
      domain:
        cpu:
          cores: 4
        resources:
          requests:
            memory: 8G
        devices:
          disks:
            - name: cdrom
              bootOrder: 1
              cdrom:
                bus: sata 
            - name: maindrive
              bootOrder: 2
              disk:
                bus: virtio
            - name: windows-guest-tools
              bootOrder: 3
              cdrom:
                bus: sata
          interfaces:
            - name: nic-0
              model: virtio
              masquerade: {}
          sound:
            name: audio
            model: ich9
      networks:
        - name: nic-0
          pod: {}
      volumes:
        - name: cdrom
          persistentVolumeClaim:
            claimName: iso-win10
        - name: maindrive
          persistentVolumeClaim:
            claimName: windows10vm-pvc
        - name: windows-guest-tools
          containerDisk:
            image: quay.io/kubevirt/virtio-container-disk
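
Once the manifest is applied and the VM started (see the virtctl commands in the Background section), you can reach the installer's display over VNC (assuming a local VNC viewer is available):

# Open a graphical console to the running VM
virtctl vnc windows10vm

# A serial console is also available (not useful during Windows setup, but handy later)
virtctl console windows10vm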

vGPU

Intel GVT-g

See intel-vgpu-kubevirt and Archwiki:Intel GVT-g. I don't recommend this due to stability issues.

Instructions
Setup the nodes

Run the following on each node with an Intel GPU.

# Enable kvmgt and iommu
sudo sh -c "echo kvmgt > /etc/modules-load.d/gpu-kvmgt.conf"
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&intel_iommu=on i915.enable_gvt=1 /' /etc/default/grub
sudo update-grub
sudo reboot

# Check that kvmgt modules are loaded
sudo lsmod | grep kvmgt

# Create a vGPU
pci_id=$(sudo lspci | grep -oP '([\d:\.]+)(?=\sVGA)')
uuid1=$(uuidgen)

cat > ~/gvtg-enable.service << EOF
[Unit]
Description=Create Intel GVT-g vGPU

[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo '${uuid1}' > /sys/devices/pci0000:00/0000:${pci_id}/mdev_supported_types/i915-GVTg_V5_8/create"
ExecStop=/bin/sh -c "echo '1' > /sys/devices/pci0000:00/0000:${pci_id}/${uuid1}/remove"
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
sudo mv ~/gvtg-enable.service /etc/systemd/system/gvtg-enable.service
sudo systemctl enable gvtg-enable --now

Notes

  • There are two sizes of vGPU you can create:
    • i915-GVTg_V5_4 supports up to 1920x1200
    • i915-GVTg_V5_8 supports up to 1024x768
    • See cat /sys/devices/pci0000:00/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/description to get a description of the vGPU.
    • On my nodes, I can create a single i915-GVTg_V5_4 or two i915-GVTg_V5_8 (see the sysfs check after this list).
  • See Archwiki:Intel GVT-g for details
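
To confirm what a node can actually provide, the mdev sysfs tree can be inspected directly (paths assume the GPU sits at 00:02.0 as above):

# vGPU types exposed by the GPU and how many of each can still be created
ls /sys/devices/pci0000:00/0000:00:02.0/mdev_supported_types/
cat /sys/devices/pci0000:00/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/available_instances

# Mediated devices that have already been created
ls /sys/bus/mdev/devices/
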
Add GPU to your VMs
spec:
  template:
    spec:
      domain:
        devices:
          gpus:
            - deviceName: intel.com/U630
              name: gpu1
              virtualGPUOptions:
                display:
                  enabled: false
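
For the deviceName above to be schedulable, the mediated device also has to be exposed as a cluster resource. One way is the permittedHostDevices section of the KubeVirt CR; this is a sketch, and the mdevNameSelector value is an assumption that must match the mdev type name reported on your node:

# Hypothetical mapping of the GVT-g mdev type to the intel.com/U630 resource name
kubectl patch kubevirt kubevirt -n kubevirt --type merge \
  -p '{"spec": {"configuration": {"permittedHostDevices": {"mediatedDevices": [
        {"mdevNameSelector": "i915-GVTg_V5_8", "resourceName": "intel.com/U630"}]}}}}'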

Networking

Bridge (Macvtap)

This allows your VM to get an IP from the host network.

  1. Install multus-cni (https://github.com/k8snetworkplumbingwg/multus-cni)
  2. Enable the Macvtap feature gate (https://kubevirt.io/user-guide/operations/activating_feature_gates/#how-to-activate-a-feature-gate); a sketch follows below
  3. Follow the remaining interface and network instructions in the KubeVirt user guide (https://kubevirt.io/user-guide/virtual_machines/interfaces_and_networks/)
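
As a rough sketch of step 2, the feature gate can be enabled by patching the KubeVirt CR (note that this merge patch replaces any feature gates already listed; the gate name Macvtap is taken from the KubeVirt feature-gate docs):

kubectl patch kubevirt kubevirt -n kubevirt --type merge \
  -p '{"spec": {"configuration": {"developerConfiguration": {"featureGates": ["Macvtap"]}}}}'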

Resources

  • KubeVirt API: https://kubevirt.io/api-reference/master/
  • intel-vgpu-kubevirt: https://kubevirt.io/2021/intel-vgpu-kubevirt.html#fedora-workstation-prep