One of the great leaps in virtualization in recent years is PCI Express device passthrough from host to guest – it gives VMs near-native access to (and performance from) PCIe devices, and leads to experiences that feel like running two physical computers inside one. Because this technology has evolved and matured rapidly, particularly in the past couple of years, there is a host of information on how to enable it on Linux with KVM + QEMU + libvirt, and much of it is already outdated. This is my attempt at capturing the necessary steps for my particular setup as of November 2022.
The steps below enable passthrough of a discrete NVIDIA GPU to the guest. The details of my particular setup are as follows:
- Operating System: Manjaro Linux, Kernel version 5.15
- QEMU version: 7.1.0
- Libvirt version: 8.9.0
- Motherboard: Gigabyte Z690 Chipset
- Processor: Intel 12th Gen Core i5 with iGPU
- GPU: NVIDIA GeForce RTX 3060 12 GB
- Host operating system has the proprietary NVIDIA drivers installed
In my opinion, these steps should work on any system with a similar or newer GPU + CPU + motherboard combination.
Step 1: Enable VT-d in the UEFI settings. You also need to set your integrated GPU as the primary GPU in the UEFI settings. Rather than key smashing at boot time, you can drop straight into the UEFI settings with a single command.
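On a systemd-based distro such as Manjaro:

```
systemctl reboot --firmware-setup
```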
Step 2: Enable the IOMMU and reboot. To do this, edit /etc/default/grub and add intel_iommu=on iommu=pt to GRUB_CMDLINE_LINUX_DEFAULT. After every change to the GRUB config, remember to run update-grub and reboot for it to take effect.
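For illustration, after this edit the relevant line in /etc/default/grub looks something like this (keep whatever flags, such as quiet, you already had):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
```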
Step 3: Identify the IOMMU group for the GPU and the HDMI audio device associated with it. Run lspci -nnk and look for the VGA compatible controller and Audio device entries; make a note of the hex pairs in the square brackets, which are their PCI [vendor:device] IDs. Then verify that the two devices are in the same IOMMU group and that they are the only devices in that group – lspci on its own doesn't print IOMMU groups, so use something like the sysfs loop below.
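A minimal sketch for listing IOMMU groups, adapted from the script on the Arch wiki that walks /sys/kernel/iommu_groups:

```bash
#!/bin/bash
# Print every IOMMU group and the PCI devices it contains.
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
```

The GPU and its audio function typically show up as two functions of the same device (e.g. 01:00.0 and 01:00.1) within one group.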
Step 4: Isolate the GPU and make sure no physical host driver other than the VFIO driver binds to it. To do this, we'll pass the PCI IDs to vfio-pci on the kernel command line and load the VFIO modules early. Edit /etc/default/grub, add vfio-pci.ids=<GPU PCI ID>,<HD Audio PCI ID> to GRUB_CMDLINE_LINUX_DEFAULT, and run update-grub. Then edit /etc/mkinitcpio.conf, add vfio_pci vfio vfio_iommu_type1 vfio_virqfd to MODULES, and run mkinitcpio -p linux515 (the last parameter depends on your distribution and kernel – check the /etc/mkinitcpio.d folder to figure out what your config name is). Reboot.
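Putting both edits together, the files end up looking something like this – the vfio-pci.ids values here are placeholders, so substitute the pair you noted in Step 3:

```
# /etc/default/grub (placeholder IDs – use your own from Step 3)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt vfio-pci.ids=10de:aaaa,10de:bbbb"

# /etc/mkinitcpio.conf – load the VFIO modules ahead of any graphics driver
MODULES=(vfio_pci vfio vfio_iommu_type1 vfio_virqfd)
```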
Step 5: Check that the vfio-pci driver is bound to both your GPU and HD Audio devices. Run lspci -nnk and confirm that the kernel driver in use is vfio-pci for both.
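If the binding worked, the output contains entries along these lines – an illustrative sketch rather than captured output, so names, addresses, and IDs will differ on your system:

```
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation ... [10de:aaaa]
	Kernel driver in use: vfio-pci
01:00.1 Audio device [0403]: NVIDIA Corporation ... [10de:bbbb]
	Kernel driver in use: vfio-pci
```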
Step 6: Modify your VM and add the GPU and HD Audio devices – I added them through the virt-manager (Virtual Machine Manager) interface. Boot your VM, enable Remote Desktop (or another means of connecting to it remotely, in case you need to debug changes that don't work), and install the NVIDIA drivers. At this point, Device Manager in Windows should show the NVIDIA GPU. Reboot and see if you get display output from the GPU – if so, congratulations, GPU passthrough is working!
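If you'd rather edit the domain XML directly than use virt-manager, the two passthrough devices go under <devices> and look roughly like this – assuming the GPU is host device 01:00.0 with its audio function at 01:00.1, so adjust the addresses to match what you found in Step 3:

```xml
<!-- GPU function -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
<!-- HDMI audio function -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
</hostdev>
```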
Step 7: Clean up your VM configuration by deleting virtual display hardware that is no longer necessary – any QXL adapter, tablet hardware, etc.
This excellent Arch wiki article on PCI passthrough and the Reddit r/vfio community are great resources if you run into trouble enabling PCI passthrough.