GPU Passthrough
GPU passthrough exposes a physical GPU directly to a virtual machine as a PCI device. The feature is made possible by the I/O Memory Management Unit (IOMMU).
Enabling GPU passthrough on Linux
- Check for virtualization support
lscpu | grep Virtualization
- Enable IOMMU in BIOS: ensure Intel VT-d (or AMD-Vi on AMD systems) is enabled in the BIOS/UEFI (often under "VT-d", "SVM" or "Virtualization Technology")
- Enable IOMMU w/ kernel parameter (see the GRUB sketch below):
  - intel:
    intel_iommu=on
  - amd:
    amd_iommu=on
  - also append the iommu=pt parameter
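A minimal sketch of where these go, assuming GRUB as the bootloader (adjust for yours):
# /etc/default/grub (Intel example; use amd_iommu=on on AMD)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# regenerate the config afterwards
grub-mkconfig -o /boot/grub/grub.cfg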
- Check if IOMMU is enabled:
$ dmesg | grep -i -e DMAR -e IOMMU
- List the IOMMU groups with a script:
#!/bin/bash
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in $g/devices/*; do
        echo -e "\t$(lspci -nns ${d##*/})"
    done
done
- Get Device IDs with
$ lspci -nn
- GPU: [1002:67df]
- AUD: [1002:aaf0]
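To pick the GPU and its audio function out of the full list, grep lspci (illustrative output; your bus addresses and IDs will differ):
$ lspci -nn | grep -iE 'vga|audio'
02:00.0 VGA compatible controller [0300]: ... [1002:67df]
02:00.1 Audio device [0403]: ... [1002:aaf0]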
- Edit /etc/modprobe.d/vfio.conf:
options vfio-pci ids=1002:67df,1002:aaf0
- Or specify kernel parameters in
/etc/default/grub:
vfio-pci.ids=10de:13c2,10de:0fbb
Update grub with
grub-mkconfig -o /boot/grub/grub.cfg
- Edit /etc/mkinitcpio.conf so the vfio modules are loaded early and the modconf hook is present:
MODULES=(vfio vfio_iommu_type1 vfio_pci vfio_virqfd nls_cp437 vfat)
HOOKS=(... modconf ...)
- regenerate initramfs with either command:
$ mkinitcpio -g /boot/linux-custom.img
$ mkinitcpio -p linux
- check if "vfio-pci" is under 'Kernel driver in use' (after reboot)
$ lspci -nnk
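For example (illustrative output; substitute your own device ID):
$ lspci -nnk -d 1002:67df
02:00.0 VGA compatible controller [0300]: ... [1002:67df]
	Kernel driver in use: vfio-pci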
- Install the following packages:
libvirt virt-manager ovmf qemu
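On Arch these are in the official repos rather than the AUR; a sketch assuming current package names (OVMF ships as edk2-ovmf):
$ pacman -S qemu-full libvirt virt-manager edk2-ovmf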
- Configure OVMF in /etc/libvirt/qemu.conf & add the path to your OVMF firmware image:
nvram = ["/usr/share/ovmf/ovmf_code_x64.bin:/usr/share/ovmf/ovmf_vars_x64.bin"]
- Services to start & enable
$ systemctl start libvirtd.service
$ systemctl start virtlogd.socket
$ systemctl enable libvirtd.service
$ systemctl enable virtlogd.socket
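Quick sanity check that libvirtd is reachable:
$ virsh list --all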
- Configure the VM (e.g. with virt-manager)
Windows on QEMU
For a hang at boot before disk decryption, see https://bbs.archlinux.org/viewtopic.php?pid=2070655#p2070655
Installing virtio on an existing Windows drive (doesn't seem to work with W10): https://superuser.com/questions/342719/how-to-boot-a-physical-windows-partition-with-qemu
How to create a QEMU W10 image and install Windows to it: https://bbs.archlinux.org/viewtopic.php?id=277584
QEMU command for a W10 VM w/ GPU passthrough
sudo qemu-system-x86_64 \
    -enable-kvm `# use KVM hardware virtualization` \
    -L . `# directory where bios.bin is` \
    --bios bios.bin `# firmware image to boot with` \
    -device qemu-xhci `# USB 3.0 (XHCI) controller` \
    -device usb-tablet `# absolute-position pointer device` \
    -m 20G `# 20 GiB of RAM` \
    -cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time `# host CPU + Hyper-V enlightenments` \
    -machine type=q35,accel=kvm `# Q35 chipset with KVM acceleration` \
    -smp $(nproc) `# use all available CPU cores` \
    -mem-prealloc `# preallocate assigned memory` \
    `#-balloon none # no memory ballooning (deprecated)` \
    `#-vga none` \
    `#-nographic` \
    -device vfio-pci,host=02:00.0,multifunction=on `# GPU passthrough` \
    -device vfio-pci,host=02:00.1 `# GPU audio passthrough` \
    -usb `# USB devices to pass through` \
    -device usb-host,vendorid=0x046d,productid=0xc01e,id=mouse `# MX518` \
    -device usb-host,vendorid=0x046d,productid=0xc312,id=keyboard `# DeLuxe 250` \
    -device usb-host,vendorid=0x046d,productid=0xc52b,id=touchpad `# Logitech K400` \
    `#-audiodev alsa,id=ad0` \
    `#-device ich9-intel-hda` \
    `#-device hda-duplex,audiodev=ad0` \
    -drive file=/dev/nvme0n1,format=raw,media=disk `# W10 disk drive` \
    -monitor stdio `# start the QEMU monitor shell`
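The vendorid/productid pairs for the usb-host devices come from lsusb (illustrative output):
$ lsusb
Bus 001 Device 004: ID 046d:c01e Logitech, Inc. MX518 Optical Mouse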
Control via VNC
# connect via ssh with trusted X11 forwarding
ssh -Y user@192.168.178.123
vncviewer :5900 # port from qemu
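For this to work the QEMU command has to export a VNC display; a sketch (display :0 listens on TCP port 5900):
-vnc :0 `# append to the qemu command line`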
GPU Passthrough without restarting
see https://github.com/bung69/Dell7710_GPU_Passthrough/blob/main/GPU.sh
VFIO to NVIDIA
# Unbind a vfio-pci bound NVIDIA GPU.
# Execute as root from a TTY.

# unbind the nvidia gpu at 0000:01:00.0 from its driver
# echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind

# bind the nvidia gpu to vfio-pci (maybe load it before)
# echo "vfio-pci" > /sys/bus/pci/devices/0000\:01\:00.0/driver_override

# unload vfio modules
modprobe -r vfio-pci
modprobe -r vfio_iommu_type1
modprobe -r vfio

# load nvidia and sound
modprobe -vv nvidia
modprobe -vv snd_hda_intel
NVIDIA to VFIO
# unload nvidia modules (nvidia_drm depends on nvidia_modeset)
modprobe -r snd_hda_intel
modprobe -r nvidia_drm
modprobe -r nvidia_modeset
modprobe -r nvidia

# load vfio modules (intel_iommu is a boot parameter, not a loadable module)
modprobe -vv vfio
modprobe -vv vfio_iommu_type1
modprobe -vv vfio-pci

# options vfio-pci ids=10de:1c03,10de:10f1
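Either way, verify afterwards which driver owns the GPU (bus address as in the example above):
lspci -nnk -s 01:00.0 | grep 'Kernel driver in use'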
Enable and use hugepages
# see if transparent hugepages are enabled
cat /sys/kernel/mm/transparent_hugepage/enabled

# should be 2MB
grep Hugepagesize /proc/meminfo

# add to /etc/fstab:
# hugetlbfs /dev/hugepages hugetlbfs mode=01770,gid=kvm 0 0

# remount to take effect
systemctl daemon-reload
umount /dev/hugepages/
mount /dev/hugepages
mount | grep huge

# set number of hugepages to use (HP_NR * 2MB = memory_used)
echo HP_NR > /proc/sys/vm/nr_hugepages

# check if the number of hugepages is correct (may be smaller than what was set)
grep HugePages_Total /proc/meminfo

# add `-mem-path /dev/hugepages` to the qemu command

# get info on total vs free huge pages
grep HugePages /proc/meminfo

# amount of huge pages used globally
grep AnonHugePages /proc/meminfo

# while qemu is running with PID, get the number of used huge pages
grep -P 'AnonHugePages:\s+(?!0)\d+' /proc/PID/smaps

# to enable after reboot, create & add to /etc/sysctl.d/40-hugepage.conf:
# vm.nr_hugepages = 550
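Worked example: backing a 20G guest with 2MB pages needs 20 * 1024 / 2 = 10240 pages:
echo 10240 > /proc/sys/vm/nr_hugepages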
Enable use of hugepages in libvirt, see https://help.ubuntu.com/community/KVM%20-%20Using%20Hugepages
<memoryBacking>
  <hugepages/>
</memoryBacking>
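The snippet goes inside the <domain> element; edit the VM with virsh (domain name is hypothetical):
$ virsh edit win10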
Disable and free hugepages
#!/bin/bash
echo 0 > /proc/sys/vm/nr_hugepages
CPU pinning & isolation
https://passthroughtools.org/cpupin/
See also https://gitlab.com/Karuri/vfio#cpu-pinning
#!/bin/sh
# /etc/libvirt/hooks/qemu
# -----------------------
command=$2

if [ "$command" = "started" ]; then
    systemctl set-property --runtime -- system.slice AllowedCPUs=0,1,6,7
    systemctl set-property --runtime -- user.slice AllowedCPUs=0,1,6,7
    systemctl set-property --runtime -- init.scope AllowedCPUs=0,1,6,7
elif [ "$command" = "release" ]; then
    systemctl set-property --runtime -- system.slice AllowedCPUs=0-11
    systemctl set-property --runtime -- user.slice AllowedCPUs=0-11
    systemctl set-property --runtime -- init.scope AllowedCPUs=0-11
fi
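libvirt only notices a newly created hook script after a daemon restart, and the script must be executable:
chmod +x /etc/libvirt/hooks/qemu
systemctl restart libvirtd.service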
# see cpu topology (hyperthreads share a core id)
lscpu -e

# better way to see all thread pairs
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -h | uniq
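With the sibling pairs known, guest vCPUs can be pinned in the libvirt domain XML; a sketch assuming threads 2 and 8 are siblings on a hypothetical 6-core/12-thread CPU:
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='8'/>
</cputune>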
Isolation using systemd
# run before starting the vm
systemctl set-property --runtime -- user.slice AllowedCPUs=0,4
systemctl set-property --runtime -- system.slice AllowedCPUs=0,4
systemctl set-property --runtime -- init.scope AllowedCPUs=0,4

# undo the isolation when done
systemctl set-property --runtime -- user.slice AllowedCPUs=0-11
systemctl set-property --runtime -- system.slice AllowedCPUs=0-11
systemctl set-property --runtime -- init.scope AllowedCPUs=0-11