LibVF.IO Setup Guide

Commodity GPU Multiplexing Driven By VFIO & YAML.

Abstract

The following document describes the reasoning behind the creation of LibVF.IO, some basic definitions relating to GPU virtualization today, and a guide to using LibVF.IO for multiplexing vendor-neutral commodity GPU devices via YAML.

Introduction

Today, if you want to run an operating system other than Windows but would like to take your Windows programs along with you, you're not likely to have an entirely seamless experience.

Tools like WINE and Valve's Proton + DXVK have provided a mechanism for Windows applications to run in macOS and Linux environments. As the name would suggest, WINE (Wine Is Not an Emulator) is not an emulator; rather, it provides an environment that approximates the Windows ABI (Application Binary Interface) to support Win32/Win64 applications in unsupported environments.

This approach has seen long adoption by Linux desktop users and has gained traction with the incorporation of official support in Valve's Steam game distribution platform. However, despite the vast energies of the WINE community across decades of work, Microsoft still manages to introduce breaking changes to its libraries and APIs, which are often incorporated in newly released games, causing either degraded application performance under WINE or entirely broken compatibility.

LibVF.IO addresses these problems by running real Windows in a virtual machine with native GPU performance. We do this by running an unmodified guest GPU driver with native hardware interfaces. This ensures that changes to Windows cannot break or otherwise degrade compatibility the way they can for programs running under a compatibility layer such as WINE or Proton.

LibVF.IO is part of an ongoing effort to remedy architectural problems in operating systems today, as detailed in a post which you can read here. We attempt to create a simplified mechanism, with perfect forward compatibility and full performance, for users to interact with binaries whose ABI (Application Binary Interface) is foreign to the host environment. We will post more on this subject in the future.

What can I do with LibVF.IO?

This section will cover what you can do today using LibVF.IO.

What are some of the current problems with VFIO on commodity GPU hardware?

Today most VFIO functionality on commodity GPU hardware involves full passthrough of a discrete physical GPU to a single virtual machine. This approach has proven to be a useful way for enthusiasts to run GPU-accelerated virtual machines at home, but with an obvious asterisk attached: the host system and the guest virtual machine each require their own discrete physical GPU. For this setup to work, users must own two discrete physical GPUs installed in the same system, which does not reflect the hardware a large number of computer users own today.

What problems does LibVF.IO solve?

LibVF.IO automates the creation and management of mediated devices (partitioned commodity GPUs shared by host & guest), the identification of NUMA nodes, the parsing and management of IOMMU devices, and the allocation of virtual functions to virtual machines.
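To give a sense of what is being automated: the kernel exposes mediated device types through sysfs, and creating a vGPU by hand means writing a UUID into a create node. A minimal sketch of that manual flow follows, assuming an mdev-capable driver is already loaded; the PCI address and type name are illustrative and will differ on your machine:

$ ls /sys/class/mdev_bus/0000:01:00.0/mdev_supported_types
$ echo "a1b2c3d4-0000-0000-0000-000000000001" | sudo tee /sys/class/mdev_bus/0000:01:00.0/mdev_supported_types/nvidia-156/create

LibVF.IO performs this bookkeeping, along with the IOMMU and NUMA placement around it, for you.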

Quick Start

The following section provides an overview of getting started with LibVF.IO to create highly performant GPU passthrough virtual machines using a single graphics card.

What you'll need

Here we'll cover what you'll need to get up and running.

Ubuntu 20.04 ISO

You'll need an installer ISO for the host operating system. Right now we're supporting Ubuntu 20.04 Desktop images. You can download the latest Ubuntu 20.04 Desktop ISO image here.

Once you've downloaded the ISO file you should create an installer USB. If you haven't installed an OS from a USB and an ISO file before, tools like Rufus on Windows or balenaEtcher on macOS work great for applying the ISO boot image to the USB disk.
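If you're preparing the USB from an existing Linux machine instead, dd works too. A minimal sketch, assuming the ISO filename below and that /dev/sdX is your USB stick - confirm the device name with lsblk first, because dd overwrites the target disk:

$ lsblk
$ sudo dd if=ubuntu-20.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync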

If you haven't installed a Linux operating system on your computer before here's a step by step guide to installing Ubuntu 20.04 Desktop with screenshots.

LibVF.IO

Once you've installed your host operating system (Ubuntu 20.04 Desktop), go to https://libvf.io and clone our repo.

It's okay if you haven't used git before. You can download the files as a zip.

Once you've downloaded the zip file you'll want to extract it to a working directory on your Ubuntu 20.04 Desktop system.
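Either way, the terminal steps look roughly like this - the GitHub URL is an assumption about where the repository currently lives, so follow the link on https://libvf.io if it has moved:

$ sudo apt install -y git
$ git clone https://github.com/Arc-Compute/LibVF.IO.git
$ cd LibVF.IO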

Windows 10 ISO

You'll also need an installer ISO for a Windows 10 virtual machine. We recommend Windows 10 LTSC for the best experience due to its more predictable updates, but this will work with any version of Windows 10. Depending on where you're downloading the .iso file from you may have several options. If you see x64, AMD64, or x86_64, pick that file - all of these mean it's the 64-bit version, which is the one you want. You will need a valid Windows 10 license to install your virtual machine.

Host Mdev GPU Driver

Pick the driver version that matches your graphics vendor.

Nvidia Merged Driver

Note to users of existing OS installs: If you are not running this setup on a fresh operating system install and have installed Nvidia's proprietary driver package, please make sure to uninstall your existing Nvidia drivers before attempting to install LibVF.IO.
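How you uninstall depends on how the driver was originally installed. A rough sketch of both cases (package names vary by release, and nvidia-uninstall only exists if the driver came from an Nvidia .run package):

$ sudo apt purge '*nvidia*'
$ sudo nvidia-uninstall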

Note to users of consumer Ampere GPUs: Right now we have no plans to announce support for SR-IOV functionality on consumer Ampere SKUs prior to the release of Nvidia's Ada Lovelace architecture.

You can find the Nvidia Merged driver download link in the vGPU community wiki here. Scroll down to section 6 and click the latest "merged driver" download link. Once the download has finished, place the .run file in the libvf.io /optional/ directory before running the installer script. That's it!

Intel i915 Driver

Instructions for installing Intel's bleeding-edge virtualization drivers for use with 6th to 9th generation silicon (GVT-g) can be found here.

On 11th generation silicon featuring Intel Xe graphics (SR-IOV), Intel's i915 driver has not yet seen the inclusion of SR-IOV features, despite Intel shipping products whose silicon supports SR-IOV and whose SR-IOV resources are exposed to the kernel by the device. If you are so inclined you can join the Intel-GFX mailing list here and ask Intel's graphics driver developers for a status update on the inclusion of SR-IOV support in the Intel i915 driver.
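You can confirm from the host that the hardware itself advertises SR-IOV by inspecting the PCI capability. A quick check, assuming the integrated GPU sits at the usual 0000:00:02.0 address (verify with lspci first):

$ sudo lspci -s 00:02.0 -vv | grep -i SR-IOV
$ cat /sys/bus/pci/devices/0000:00:02.0/sriov_totalvfs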

AMD GPU-IOV Module Driver

AMD's GPU-IOV Module (GIM) can be downloaded and compiled from here. It is possible to run this code on a limited number of commodity GPU devices which may be modified to enable the relevant APIs. You can read more about the modification process here. While this approach is entirely workable today, there are a variety of downsides to AMD GPU devices for use with virtualization. Those reasons are as follows:

  • The latest AMD GPU that this software runs on is AMD's Tonga architecture S7150 which was end of life (EOL) in 2017.
  • AMD has produced other MxGPU capable GPUs which they refuse to publicly release open source driver code for.
  • AMD refuses to support their current open source code. In order to use their currently available open source code you will need to check out pull request 24, which makes the GPU-IOV Module usable on modern kernel versions. You can see the relevant pull request link here, and a sample checkout follows this list.

It is for these reasons that we do not recommend the use of AMD GPU devices for virtualization purposes at this time.

We remain hopeful that AMD will recognize forthcoming changes in GPU virtualization with the creation of open standards such as Auxiliary Domains (AUX Domains), Mdev (VFIO-Mdev, developed by Nvidia, Red Hat, and Intel), and Alternative Routing-ID Interpretation (ARI), especially in light of Intel's market entrance with their Xe and ARC line of GPUs supporting SR-IOV. We encourage AMD to reconsider its stance on calls for increased openness & cooperation with the open source GPU virtualization community.
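If you do want to experiment with GIM plus that patch, GitHub exposes pull requests as fetchable refs. A sketch, assuming GIM's current GitHub home (substitute the repository linked above if it differs):

$ git clone https://github.com/GPUOpen-LibrariesAndSDKs/MxGPU-Virtualization.git
$ cd MxGPU-Virtualization
$ git fetch origin pull/24/head:pr-24
$ git checkout pr-24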

Run the install script & reboot

Now that you've installed your host OS and downloaded the appropriate GPU driver files, you can run the install script. Open a terminal on your freshly installed Ubuntu 20.04 Desktop host system and navigate to the libvf.io directory you downloaded earlier.

Run the following command from within the libvf.io directory (don't change directory into the scripts directory):

$ ./scripts/install-libvfio.sh

If you are using a system with an Nvidia based GPU and have placed the optional driver file in the libvf.io /optional/ directory, the installation script will prompt you to reboot your system after it has disabled Ubuntu's default Nouveau GPU driver. After you have restarted your system you'll notice that the screen resolution is reduced - don't worry, that's part of the installation process. Now that you've rebooted your machine, go back to the libvf.io directory and run the same command again:

$ ./scripts/install-libvfio.sh

This will continue the installation process from where you left off.

If you are using an Nvidia GPU you should have placed the merged driver .run file inside of the libvf.io /optional/ directory. Provided you have done that, the install script will now automatically install this driver and sign the module for you. If you have UEFI secure boot enabled you may be asked to create a password for use with your next reboot.
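If you're unsure whether secure boot is enabled on your machine, mokutil will tell you before you reboot (it prints "SecureBoot enabled" or "SecureBoot disabled"):

$ mokutil --sb-state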

If you have UEFI secure boot enabled you'll see a screen that looks like this when you reboot:

UEFI Secure Boot

Follow these steps when you see this screen:

  • When you reboot your computer you will see a new menu. Use your arrow keys to navigate to the menu option "Enroll MOK" and press enter.
  • In the "Enroll MOK" menu press "Continue".
  • You will now see a menu that says "Enroll the key(s)?" at the top. Use your arrow keys to select "Yes" and press enter again.
  • You will now be asked to enter the password you created earlier. Type that in and press enter.
  • Now you'll see a menu with three options, the top one will be "Reboot". Use your arrow keys to select that and press enter.

Now that you've rebooted you're ready to setup your Windows VM!

Setting up your first VM

The following section will touch on the process of starting your first GPU accelerated VM with LibVF.IO.

You can begin by copying a template .yaml file from the example folder inside of the LibVF.IO repository. You can see some of the example YAML files below:
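As a rough sketch of what such a template contains - the key names below are only the options discussed in this guide, and the files in the repository's example folder are authoritative for the real names and structure:

# Hypothetical -mdev.yaml sketch; defer to the repository's example files.
minVRam: 1000        # minimum vGPU framebuffer size (assumed to be in MB)
maxVRam: 2000        # maximum vGPU framebuffer size (assumed to be in MB)
# mdevType: nvidia-156   # optionally pin an exact mediated device type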

Once you've selected an appropriate template yaml file you can copy it to anywhere that's convenient for you to work with. In this example we copied it to our home folder, where we also have the Windows installer .ISO file.
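For example (the template filename and clone path here are placeholders; use whichever example file you picked):

$ cp ~/LibVF.IO/example/your-template-mdev.yaml ~/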

Inside the -mdev.yaml file, you should change the maxVRam: and minVRam: values to reflect your desired vGPU guest size. Alternatively, you can manually specify your mediated device type with the mdevType: option.

The default values of minVRam: 1000 & maxVRam: 2000 should ideally not be adjusted during setup, as larger sizes can sometimes interfere with installation. Once you've finished following this guide and your VM setup is running stably, you can incrementally increase these values to find a minVRam & maxVRam size that works well for your workloads and hardware.

Install your Windows guest

Now you'll need to install your base Windows virtual machine.

For example, to create a 100 gigabyte VM you can use the following command:

$ arcd create /path/to/your/yaml/file.yaml /path/to/your/windows.iso 100

Install the guest utils

After you've finished installing your Windows VM, shut it down and restart it with the guest utilities installer.

You can start the VM with the guest utilities installer attached using the following command:

$ arcd start /path/to/your/yaml/file.yaml --preinstall

Once you power on Windows, make sure to disable the power saving options in Windows which turn off the screen when it's inactive. Now you'll need to install the IVSHMEM driver. Navigate to the CD-ROM attached to your VM, copy the virtio-win10-prewhql zip file to your desktop, then extract it.

Windows will not prompt for a driver for the IVSHMEM device, instead, it will use a default null (do nothing) driver for the device. To install the IVSHMEM driver you will need to go into the device manager and update the driver for the device “PCI standard RAM Controller” under the “System Devices” node. [via https://looking-glass.io/docs/stable/install/#host]

During the above step you'll see two devices named "PCI standard RAM Controller" - make sure you apply the IVSHMEM driver to both.

Next install the Looking Glass Service.

From within the CD-ROM run: looking-glass-host-setup.exe as an administrator.

Finally install the SCREAM audio device.

From within the CD-ROM copy the Scream3.8 zip file to your desktop and then extract it. Open the extracted folder and right click on Install-x64.bat, then click "Run as Administrator".

Now go back to the CD-ROM and right click on the file titled scream-ivshmem-reg.bat then click "Run as Administrator" once again.

Before you shut down your VM, ensure that Windows has automatically detected your GPU and installed the appropriate driver version for you. On Windows 10 LTSC using an Nvidia GPU, if you leave the VM alone for a few minutes after you install it you'll usually come back to find Windows Update has installed your GPU drivers for you automatically. If it hasn't installed automatically, you should find a consumer GPU driver version online which matches or is older than your host GPU driver version. One unmodified Nvidia driver version we tested working can be found here. In some cases, such as on AMD GPU devices, a newer driver version on the guest will not be impacted by an older version on the host; with other driver vendors this may not be the case.
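To see the host driver version you need to match or stay below, nvidia-smi on the host reports it directly (assuming the merged Nvidia driver is loaded):

$ nvidia-smi --query-gpu=driver_version --format=csv,noheader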

Run your VM with full graphics performance

Now that you've finished installing the guest utilities, you should change the yaml values to enable Looking Glass & SPICE.
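The exact keys live in your template file; as a purely hypothetical sketch (these key names are assumptions, not confirmed by this guide - mirror whatever your example template actually calls them):

# Hypothetical key names; your example template is authoritative.
introspect: true   # enable the Looking Glass shared-memory display path
spice: true        # enable SPICE for keyboard, mouse, and clipboard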

Once you've started your VM make sure you set your Windows audio device to "Speakers (Scream (WDM))".

The create command and the --preinstall flag will ignore these values, as it's assumed IVSHMEM and Looking Glass have not been installed yet. If you want to set these values back to false for any reason and revert to the default display mode, you can change them at any time.

Now that you've updated your .yaml file you can simply run your VM with full graphics performance by using the following command:

$ arcd start /path/to/your/yaml/file.yaml

Contributing to LibVF.IO

If you'd like to participate in the development of LibVF.IO you can send us a pull request or submit issues on our GitHub repo. Right now we'd especially love help improving documentation, extending setup automation to more host operating systems, and automating setup on Intel GVT-g capable hardware.

Jobs at Arc Compute

We're Hiring Remote & Local Engineers!

Arc Compute is a venture capital funded cloud service provider focused on GPU virtualization tools, based in Toronto, Ontario, Canada. Currently we're looking for full-time virtualization, GPU driver, and kernel engineers. If you're interested in the open source work we're doing above and have expertise in this area, we'd love to work with you!


If you'd like to help us build the open future of GPU virtualization send us an email at:

[email protected]

If you have any comments, recommendations, or questions pertaining to any of the above material my contact information is as follows:

Arthur Rasmusson can be reached at

twitter.com/arcvrarthur on Twitter

and by email at [email protected]