
> Even the GUI works, forwarded to ChromeOS. So start xclock and it opens.

What is graphics performance like for the guest? Does this use GPU virtualization (somewhat supported for Intel GPUs in recent kernels)?




It's hardware-level passthrough with zero performance or compatibility hit. The catch is that the guest needs exclusive access to the device, i.e. you need two GPUs: one for the host and any other guests, and one dedicated entirely to the passthrough VM. There are a few applications, like Chromium, that incorrectly detect the GPU configuration and need manual overrides.

Also, it helps to use a recent AMD card and the in-tree amdgpu driver instead of the out-of-tree nvidia driver.

Overall, you trade software problems for hardware problems (UEFI firmware versions can break the setup), but if you get it working it works great.
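For anyone curious what "dedicating" the second GPU looks like in practice, the usual approach is to claim it for the vfio-pci stub driver before the real driver loads. A minimal sketch — the PCI IDs below are examples for an RX 580 and must be replaced with your own card's `vendor:device` IDs from `lspci -nn`:

```shell
# /etc/modprobe.d/vfio.conf
# Claim the guest GPU and its HDMI audio function for vfio-pci so the
# host's amdgpu/nvidia driver never binds to them. Example IDs only;
# get yours from `lspci -nn`.
options vfio-pci ids=1002:67df,1002:aaf0

# Ensure vfio-pci wins the race against the host driver:
softdep amdgpu pre: vfio-pci
```

After rebuilding the initramfs and rebooting, `lspci -k` should show `vfio-pci` as the kernel driver in use for that card.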


> You just have to set it up to fully power down the GPU.

Can you please tell us a bit more?

I have an NVidia + Intel GPU setup. I was thinking of using the NVidia card for KVM (powered down when KVM isn't in use) and the Intel one for the native Linux display, with Pop!_OS or Ubuntu 22.04 LTS, as another attempt at Wayland (since I've read NVidia GPUs can cause many issues there).
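Not the parent, but one common way to do this with libvirt is to detach the card from the host when the VM needs it and let PCI runtime power management idle it the rest of the time. A sketch, assuming the NVidia card sits at the example address `0000:01:00.0` (check yours with `lspci`):

```shell
# Detach the NVIDIA card from the host so KVM can take it:
virsh nodedev-detach pci_0000_01_00_0

# ... run the VM with the card passed through ...

# Give it back to the host after the guest shuts down:
virsh nodedev-reattach pci_0000_01_00_0

# With no driver actively using it, allow runtime PM to power it down:
echo auto > /sys/bus/pci/devices/0000:01:00.0/power/control
```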


I run it under Xen with a GPU passed through using VT-d. There are dozens of guides for running it in Xen and also many guides for passing GPUs through to Xen guests. No problems to report. The OSX guest has full access to the GPU. Yet to attempt an OS upgrade though.

> without having to care about trying to do complicated things ([...] pci-passthrough of a second GPU)

Actually, GPU passthrough is quite simple: configure host Linux to assign a virtual driver to a secondary GPU on boot and configure QEMU to use that driver on guest boot[1]. If you buy a GPU that is natively supported by macOS (e.g., Sapphire Radeon RX 580 Pulse 4GB), you get 99% performance and 100% stability of a native Mac (not a single macOS/QEMU crash in years for me).

Actually, if you configure your QEMU mouse/keyboard to use evdev[2] (event devices), you can game under macOS/QEMU or Windows/QEMU without any input lag.

[1] https://github.com/kholia/OSX-KVM/blob/master/notes.md#gpu-p... [2] https://passthroughpo.st/using-evdev-passthrough-seamless-vm...
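To make the evdev point concrete: QEMU's `input-linux` objects attach the host's raw event devices directly to the guest. A sketch with placeholder device paths — list your actual devices under `/dev/input/by-id/`:

```shell
# Pass physical keyboard + mouse to the guest via evdev. With
# grab_all=on, pressing both Ctrl keys toggles input between host
# and guest. Device paths are illustrative, not real hardware IDs.
qemu-system-x86_64 \
  -object input-linux,id=kbd0,evdev=/dev/input/by-id/usb-Example_Keyboard-event-kbd,grab_all=on,repeat=on \
  -object input-linux,id=mouse0,evdev=/dev/input/by-id/usb-Example_Mouse-event-mouse \
  ...
```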


Does graphics acceleration work properly? Things like this exist for Linux already, but unfortunately they aren't as smooth as Chrome OS.

> I might be outdated, but GPU virtualization does not allow you to use your display outputs on your graphic card. So for a workstation it isn't going to work as you would have to use another machine (sure, it could be a thin client) to remote desktop

This may change now with Looking Glass, which gives a fast way to share the framebuffer between host and guest: https://github.com/gnif/LookingGlass


Check out the VirGL and virtio-gpu projects. It's already available in QEMU if you're willing to do some work: virtualized OpenGL (and soon Vulkan?) access to the host GPU from inside the VM.

e: my memory fails me; I've already run Vulkan apps in qemu.

> [lexi@arch-steam ~]$ vkcube

> Selected GPU 0: Virtio-GPU Venus (Intel(R) UHD Graphics (CML GT2)), type: 1

Granted this was ChromeOS[0], but ultimately it's the same qemu. You just need the right flags and for the guest to have a mesa driver aware of virtio-gpu.

0. https://chromeunboxed.com/how-to-enable-vulkan-crostini/
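For reference, a minimal VirGL invocation looks roughly like this (a sketch: the disk image name is a placeholder, and the guest needs a Mesa build with the virgl driver; Venus/Vulkan needs a newer QEMU and extra device options beyond what's shown here):

```shell
# OpenGL acceleration in the guest via virtio-gpu + VirGL.
# The guest renders through the host GPU instead of software rasterizing.
qemu-system-x86_64 \
  -enable-kvm -m 4G \
  -device virtio-vga-gl \
  -display gtk,gl=on \
  -drive file=guest.qcow2,if=virtio
```

Inside the guest, `glxinfo | grep renderer` should then report a virgl device rather than llvmpipe.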


So am I - Chromium on Arch. i7 Kaby Lake. Out of the box wrt graphics, I have made no changes at all. #ignore-gpu-blacklist is still disabled. I suppose I could get the Nvidia GPU fired up but it isn't needed for this.

60fps.


That seems to imply that it can use the same GPU as the host? I thought a dedicated one was required (not just a 'common use').

I am excited by this because I do use crouton, to launch my normal E17 (E19) desktop, and while it works great, it does not work as fast as E17 using the native framebuffer.

And on my old Samsung Series 3 (XE303C12) the Mali GPU can only service one client at a time, so crouton will never be as fast as E17 would while not sharing cycles with Chrome.

The GPU drivers are distributed as blobs, as far as I can tell, and while they "only support" EGL/GLES and not OpenGL, they also only do so on the kernel version distributed by Google. That means even if you perform the onerous machinations I needed to boot with the kernel DRM, put the Mali drivers in place where your software will find them, and make all of this work with your distribution's version of X11 (or Wayland, I guess maybe?), you are still stuck running a 3.8.11 kernel, which is really ancient, about ten releases behind.

It's a nice puzzle, but I am ready for someone to give me the answer. I have high hopes that with the "Crouton in a Chrome Window" there will be some kind of a pass-through mode that allows Enlightenment to grab what it needs from the existing screen and do its various compositing works in an accelerated mode. (The eternal optimist in me...)


> New GPU based terminal is available to Linux now

What is a "GPU based terminal"? Does it run directly on the GPU?


Why? That sounds like even more work, does it provide any benefits? (The guest would need solid graphics acceleration.)

Or if your host uses an Intel CPU, just use the integrated GPU for the host so the discrete GPU is free for the guest OS.

What guest OS do you use for GPU accelerated Linux guests? I have not been able to get GPU accelerated full res Linux guests working on M1/M2 via UTM.

I chose the system because I had terrible experiences with amd/ati on Linux.

I use bumblebee, acpi_call and bbswitch. With that I can launch processes that use the nvidia chip by prepending a command, for example "optirun ioquake3". That launches a secondary X server and streams the frames to my primary one using VirtualGL. Works quite well!

edit: darn, replied to the wrong comment.
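For anyone trying this, two quick checks confirm the bumblebee setup described above is actually working (both commands assume bumblebee and bbswitch are installed):

```shell
# Confirm the discrete NVIDIA chip renders when routed through optirun:
optirun glxinfo | grep "OpenGL renderer"

# bbswitch reports whether the discrete GPU is currently powered:
cat /proc/acpi/bbswitch
```

If the first command reports an NVIDIA renderer and the second flips between ON (during `optirun`) and OFF (idle), the power gating is doing its job.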


> This means that one of the major problems with Linux on the desktop for power users goes away, and it also means that we can now deploy Linux only GPU tech such as HIP on any operating system that supports this trick!

If you're brave enough, you can already do that with GPU passthrough. It's possible to detach the entire GPU from the host and transfer it to a guest and then get it back from the guest when the guest shuts down.
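The detach/reattach dance can be done entirely through sysfs with the kernel's `driver_override` mechanism. A sketch using the example address `0000:01:00.0` and amdgpu as the host driver (substitute your own):

```shell
# Hand the GPU from the host driver to vfio-pci before guest boot:
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo vfio-pci > /sys/bus/pci/devices/0000:01:00.0/driver_override
echo 0000:01:00.0 > /sys/bus/pci/drivers/vfio-pci/bind

# When the guest shuts down, return it to the host:
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo > /sys/bus/pci/devices/0000:01:00.0/driver_override
echo 0000:01:00.0 > /sys/bus/pci/drivers_probe
```

The "brave" part is that not all GPUs reset cleanly when rebound, which is where it can go wrong.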


It's very close to native performance. The only major issue I had was crackling sound, but it's resolvable. In my setup I have a second set of controls (keyboard + mouse) forwarded to the guest, and the HDMI output of the guest GPU is connected to an HDMI switch. Once everything is set up, it looks and feels like you have a second PC under your desk.

I use conventional Linux KVM virtualization ("libvirtd" + "virt-manager" GUI) on Fedora 32 (previously used Debian).

Most valuable resource is Arch Wiki: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVM...

I've never used Arch Linux, however I always refer to Arch Wiki - it's extremely useful even for other distros.


The first problem is rendering within the guest. If you only have one GPU, then GVT-g [1] virtualizes it with just a bit of overhead. But it's Intel only.

The second problem is getting those pixels onto your screen in the host. SPICE is not as fast as Looking Glass [2], which sets up a shared memory buffer between the host and guest. This has acceptable performance even for modern games.

The OP doesn't seem to utilize these techniques, so I don't think it can plausibly claim to have the fastest configuration - at least not yet.

[1] https://wiki.archlinux.org/title/Intel_GVT-g

[2] https://looking-glass.io
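To illustrate the GVT-g side: virtual GPU instances are created through the mediated-device (mdev) sysfs interface on the physical Intel GPU (conventionally at `0000:00:02.0`). A sketch — the UUID is arbitrary (any `uuidgen` output works) and the type name varies by hardware:

```shell
# List the vGPU profiles this Intel GPU supports:
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types

# Create a vGPU instance of one profile (example type and UUID):
echo "a297db4a-f4c2-11e6-90f6-d3b88d6c9525" > \
  /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create
```

The resulting mdev device can then be handed to QEMU/libvirt as the guest's GPU while the host keeps using the same physical card.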


I understand that you have to run their blob driver, but the comment I replied to was complaining about OpenGL performance.
