why,

I used to run a Proxmox server with Windows in a VM that had a GPU via GPU passthrough, then connected to it via Parsec. On my laptop on the local network it was pretty good.
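
For anyone wanting to replicate that setup, here's a minimal sketch of the Proxmox side, assuming an Intel host and the GPU at PCI address 01:00 (adjust both for your hardware, and add an EFI disk for the OVMF firmware):

    # /etc/default/grub -- enable the IOMMU, then run update-grub and reboot
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # /etc/modules -- load the VFIO modules at boot
    vfio
    vfio_iommu_type1
    vfio_pci

    # /etc/pve/qemu-server/<vmid>.conf -- hand the whole GPU to the Windows VM
    machine: q35
    bios: ovmf
    cpu: host,hidden=1                     # hidden=1 hides KVM from picky GPU drivers
    hostpci0: 0000:01:00,pcie=1,x-vga=1    # whole device, passed as primary GPU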

magikmw,

My box sits in my closet, so I can’t really help much with Docker or VMs. But I use a Sunshine server with the Moonlight client. Keep in mind you can’t fight the latency that comes from the distance between server and client. I can use 4G/5G for turn-based or active-pause games, but I wouldn’t try anything real-time. On cable my ping is under ms, enough to play shooters as badly as I do these days.
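
For reference, the Moonlight side is about this simple. A sketch using the moonlight-embedded CLI (the Qt client does the same through its GUI; sunshine-box.lan is a placeholder for your server’s address):

    # Pair once: Sunshine asks for the PIN in its web UI on https://<server>:47990
    moonlight pair sunshine-box.lan

    # Stream at 1080p60; bitrate is in Kbps, so crank it on LAN, drop it on mobile
    moonlight stream -1080 -fps 60 -bitrate 20000 sunshine-box.lan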

I use AMD for both CPU and GPU, and wouldn’t try Nvidia if using Linux as the server.

I used to run a VM in XenServer/XCP-ng and pass through a GPU with a dummy HDMI plug. It was a Windows 10 VM and it ran very well bar the pretty crap CPU, but I did get around 30fps in 1080p Tarkov, sometimes more with AMD upscaling. Back then I was using Parsec, but I’ve found Sunshine and Moonlight work better for me.
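
For anyone on XCP-ng wanting to do the same, generic PCI passthrough there works by hiding the GPU from dom0 and pinning it to the VM. A sketch; the PCI address and VM UUID are placeholders:

    # On the host: find the GPU's PCI address
    lspci | grep -i vga

    # Hide it from dom0 so the host driver doesn't grab it, then reboot
    /opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:01:00.0)"

    # Attach it to the (halted) VM
    xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:01:00.0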

I should also mention I never tried to support multiple users. You can probably play “local” multiplayer with both Parsec and Moonlight, but any setup that shares one GPU will require some proprietary vGPU fuckery, so the easiest option is to build a PC with multiple GPUs and assign one to each VM directly.
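
If you go the one-GPU-per-VM route, the thing to verify before buying anything is that the board puts each card in its own IOMMU group, otherwise you can’t hand them out independently. The usual check, a generic Linux snippet:

    #!/bin/bash
    # Print every IOMMU group and the devices in it. Each GPU you plan to
    # pass through should sit in its own group (sharing it only with its
    # own audio/USB sub-functions).
    shopt -s nullglob
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU Group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo -e "\t$(lspci -nns "${d##*/}")"
        done
    done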

ftbd,

What kind of machine do you use as a client? And how does performance compare to playing on the server directly?

magikmw,

I use a 2016 Asus Zenbook with an integrated Intel GPU.

The performance is comparable. The only real difference is latency, obviously, although it’s fairly negligible on LAN, and encoding/decoding sometimes creates artifacts and smudges, though that improves at higher bandwidth.

noobface,

If budget is no object, it’s only kind of a pain in the ass with Nvidia’s vGPU solutions for data centers. Even with $10k spent, there are hypervisor compatibility issues, license servers, compatibility challenges with drivers for games and consumer OSes on hypervisors, and other inane garbage.

Consumer-wise it’s technically the easiest it’s ever been, with SR-IOV support for hardware-accelerated VMs on Intel 13th & 14th gen processors with iGPUs. However, iGPU performance is kinda dogshit, drivers are wonky, and passing multiple display heads through to VMs is weird for hypervisors.
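
To make the SR-IOV point concrete, the flow is: carve the iGPU into virtual functions, then pass one VF to each VM like any other PCI device. A sketch assuming the out-of-tree i915-sriov-dkms driver (mainline i915 doesn’t expose this yet) and the iGPU at its usual 00:02.0 address:

    # Kernel cmdline additions used by the i915-sriov-dkms driver
    # i915.enable_guc=3 i915.max_vfs=7

    # Split the iGPU into two virtual functions...
    echo 2 > /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs

    # ...which show up as extra display devices you can assign to VMs
    lspci | grep -i vga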

On the Docker side of things, YMMV based on what you’re trying to accomplish. Technically the NVIDIA Container Toolkit does support CUDA & display heads for containers: hub.docker.com/r/nvidia/vulkan/tags. I haven’t gotten it working yet, but this is the basis for my next set of experiments.
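
For anyone who wants to run the same experiment, the shape of it would be something like this. A sketch: the image tag is a placeholder to fill in from the linked hub page, and it assumes the NVIDIA Container Toolkit is already installed and registered with Docker:

    # Expose all GPUs plus graphics/display capabilities to the container
    docker run --rm --gpus all \
        -e NVIDIA_DRIVER_CAPABILITIES=all \
        nvidia/vulkan:<tag-matching-your-driver> \
        vulkaninfo

    # If the toolkit is wired up, vulkaninfo should list the host GPU as a
    # Vulkan physical device instead of erroring out.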
